00:00:00.000 Started by upstream project "autotest-per-patch" build number 124219 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.147 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.148 The recommended git tool is: git 00:00:00.148 using credential 00000000-0000-0000-0000-000000000002 00:00:00.150 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.204 Fetching changes from the remote Git repository 00:00:00.206 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.257 Using shallow fetch with depth 1 00:00:00.257 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.257 > git --version # timeout=10 00:00:00.287 > git --version # 'git version 2.39.2' 00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.561 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.573 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.586 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:07.586 > git config core.sparsecheckout # timeout=10 00:00:07.599 > git read-tree -mu HEAD # timeout=10 00:00:07.615 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:07.640 Commit message: "pool: fixes for VisualBuild class" 00:00:07.640 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:07.778 [Pipeline] Start of Pipeline 00:00:07.797 [Pipeline] library 00:00:07.799 Loading library shm_lib@master 00:00:07.799 Library shm_lib@master is cached. Copying from home. 00:00:07.822 [Pipeline] node 00:00:07.838 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.840 [Pipeline] { 00:00:07.851 [Pipeline] catchError 00:00:07.853 [Pipeline] { 00:00:07.867 [Pipeline] wrap 00:00:07.879 [Pipeline] { 00:00:07.887 [Pipeline] stage 00:00:07.889 [Pipeline] { (Prologue) 00:00:08.111 [Pipeline] sh 00:00:08.398 + logger -p user.info -t JENKINS-CI 00:00:08.422 [Pipeline] echo 00:00:08.424 Node: CYP9 00:00:08.433 [Pipeline] sh 00:00:08.734 [Pipeline] setCustomBuildProperty 00:00:08.745 [Pipeline] echo 00:00:08.746 Cleanup processes 00:00:08.751 [Pipeline] sh 00:00:09.034 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.035 2690204 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.051 [Pipeline] sh 00:00:09.340 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.340 ++ grep -v 'sudo pgrep' 00:00:09.340 ++ awk '{print $1}' 00:00:09.340 + sudo kill -9 00:00:09.340 + true 00:00:09.355 [Pipeline] cleanWs 00:00:09.365 [WS-CLEANUP] Deleting project workspace... 00:00:09.365 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.373 [WS-CLEANUP] done 00:00:09.377 [Pipeline] setCustomBuildProperty 00:00:09.390 [Pipeline] sh 00:00:09.678 + sudo git config --global --replace-all safe.directory '*' 00:00:09.750 [Pipeline] nodesByLabel 00:00:09.751 Found a total of 2 nodes with the 'sorcerer' label 00:00:09.760 [Pipeline] httpRequest 00:00:09.765 HttpMethod: GET 00:00:09.765 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.771 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.796 Response Code: HTTP/1.1 200 OK 00:00:09.797 Success: Status code 200 is in the accepted range: 200,404 00:00:09.798 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:13.539 [Pipeline] sh 00:00:13.826 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:13.846 [Pipeline] httpRequest 00:00:13.852 HttpMethod: GET 00:00:13.852 URL: http://10.211.164.101/packages/spdk_28a75b1f35aee1695127449180a47c7bca8d93e3.tar.gz 00:00:13.853 Sending request to url: http://10.211.164.101/packages/spdk_28a75b1f35aee1695127449180a47c7bca8d93e3.tar.gz 00:00:13.873 Response Code: HTTP/1.1 200 OK 00:00:13.874 Success: Status code 200 is in the accepted range: 200,404 00:00:13.874 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_28a75b1f35aee1695127449180a47c7bca8d93e3.tar.gz 00:01:08.624 [Pipeline] sh 00:01:08.957 + tar --no-same-owner -xf spdk_28a75b1f35aee1695127449180a47c7bca8d93e3.tar.gz 00:01:11.509 [Pipeline] sh 00:01:11.796 + git -C spdk log --oneline -n5 00:01:11.796 28a75b1f3 pkgdep/helpers: Move helper functions to dedicated helpers.sh 00:01:11.796 5b4cf6db0 nvme/tcp: allocate nvme_tcp_req aligned to a cache line 00:01:11.796 c69768bd4 nvmf: add more debug logs related to cntlid and qid 00:01:11.796 7d5421b64 test/cuse: active namespaces were tested incorrectly 00:01:11.796 344c65257 nvmf/auth: add dhvlen check 00:01:11.808 [Pipeline] } 00:01:11.827 [Pipeline] // stage 00:01:11.836 [Pipeline] stage 00:01:11.839 [Pipeline] { (Prepare) 00:01:11.906 [Pipeline] writeFile 00:01:11.923 [Pipeline] sh 00:01:12.209 + logger -p user.info -t JENKINS-CI 00:01:12.222 [Pipeline] sh 00:01:12.508 + logger -p user.info -t JENKINS-CI 00:01:12.583 [Pipeline] sh 00:01:12.869 + cat autorun-spdk.conf 00:01:12.869 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.869 SPDK_TEST_NVMF=1 00:01:12.869 SPDK_TEST_NVME_CLI=1 00:01:12.869 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.869 SPDK_TEST_NVMF_NICS=e810 00:01:12.869 SPDK_TEST_VFIOUSER=1 00:01:12.869 SPDK_RUN_UBSAN=1 00:01:12.869 NET_TYPE=phy 00:01:12.876 RUN_NIGHTLY=0 00:01:12.879 [Pipeline] readFile 00:01:12.898 [Pipeline] withEnv 00:01:12.899 [Pipeline] { 00:01:12.934 [Pipeline] sh 00:01:13.219 + set -ex 00:01:13.219 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:13.219 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:13.219 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.220 ++ SPDK_TEST_NVMF=1 00:01:13.220 ++ SPDK_TEST_NVME_CLI=1 00:01:13.220 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.220 ++ SPDK_TEST_NVMF_NICS=e810 00:01:13.220 ++ SPDK_TEST_VFIOUSER=1 00:01:13.220 ++ SPDK_RUN_UBSAN=1 00:01:13.220 ++ NET_TYPE=phy 00:01:13.220 ++ RUN_NIGHTLY=0 00:01:13.220 + case $SPDK_TEST_NVMF_NICS in 00:01:13.220 + DRIVERS=ice 00:01:13.220 + [[ tcp == \r\d\m\a ]] 00:01:13.220 + [[ -n ice ]] 00:01:13.220 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:01:13.220 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.800 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.801 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.801 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.801 + true 00:01:19.801 + for D in $DRIVERS 00:01:19.801 + sudo modprobe ice 00:01:19.801 + exit 0 00:01:19.811 [Pipeline] } 00:01:19.826 [Pipeline] // withEnv 00:01:19.830 [Pipeline] } 00:01:19.849 [Pipeline] // stage 00:01:19.859 [Pipeline] catchError 00:01:19.861 [Pipeline] { 00:01:19.875 [Pipeline] timeout 00:01:19.876 Timeout set to expire in 50 min 00:01:19.877 [Pipeline] { 00:01:19.895 [Pipeline] stage 00:01:19.898 [Pipeline] { (Tests) 00:01:19.917 [Pipeline] sh 00:01:20.199 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.199 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.199 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.199 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.199 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.199 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.199 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.199 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.199 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.199 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.199 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.199 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.199 + source /etc/os-release 00:01:20.199 ++ NAME='Fedora Linux' 00:01:20.199 ++ VERSION='38 (Cloud Edition)' 00:01:20.199 ++ ID=fedora 00:01:20.199 ++ VERSION_ID=38 00:01:20.199 ++ VERSION_CODENAME= 00:01:20.199 ++ PLATFORM_ID=platform:f38 00:01:20.199 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.199 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.199 ++ LOGO=fedora-logo-icon 00:01:20.199 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.199 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.199 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.199 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.199 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.199 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.199 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.199 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.199 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.199 ++ SUPPORT_END=2024-05-14 00:01:20.199 ++ VARIANT='Cloud Edition' 00:01:20.199 ++ VARIANT_ID=cloud 00:01:20.199 + uname -a 00:01:20.199 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.199 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.502 Hugepages 00:01:23.502 node hugesize free / total 00:01:23.502 node0 1048576kB 0 / 0 00:01:23.502 node0 2048kB 0 / 0 00:01:23.502 node1 1048576kB 0 / 0 00:01:23.502 node1 2048kB 0 / 0 00:01:23.502 00:01:23.502 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.502 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 
0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:23.502 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:23.502 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:23.502 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:23.502 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:23.502 + rm -f /tmp/spdk-ld-path 00:01:23.502 + source autorun-spdk.conf 00:01:23.502 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.502 ++ SPDK_TEST_NVMF=1 00:01:23.502 ++ SPDK_TEST_NVME_CLI=1 00:01:23.502 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.502 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.502 ++ SPDK_TEST_VFIOUSER=1 00:01:23.502 ++ SPDK_RUN_UBSAN=1 00:01:23.502 ++ NET_TYPE=phy 00:01:23.502 ++ RUN_NIGHTLY=0 00:01:23.502 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.502 + [[ -n '' ]] 00:01:23.502 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.502 + for M in /var/spdk/build-*-manifest.txt 00:01:23.502 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.502 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.502 + for M in /var/spdk/build-*-manifest.txt 00:01:23.502 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.502 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.502 ++ uname 00:01:23.502 + [[ Linux == \L\i\n\u\x ]] 00:01:23.502 + sudo dmesg -T 00:01:23.502 + sudo dmesg --clear 00:01:23.502 + dmesg_pid=2691766 00:01:23.502 + [[ Fedora Linux == FreeBSD ]] 00:01:23.502 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.502 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.502 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.502 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.502 + sudo dmesg -Tw 00:01:23.502 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.502 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.502 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.502 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:23.502 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.502 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.502 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.502 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.502 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.502 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.502 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.502 Test configuration: 00:01:23.502 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.502 SPDK_TEST_NVMF=1 00:01:23.502 SPDK_TEST_NVME_CLI=1 00:01:23.502 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.502 SPDK_TEST_NVMF_NICS=e810 00:01:23.502 SPDK_TEST_VFIOUSER=1 00:01:23.502 SPDK_RUN_UBSAN=1 00:01:23.502 NET_TYPE=phy 00:01:23.502 RUN_NIGHTLY=0 14:10:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.502 14:10:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.502 14:10:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.502 14:10:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.502 14:10:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.502 14:10:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.502 14:10:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.502 14:10:00 -- paths/export.sh@5 -- $ export PATH 00:01:23.502 14:10:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.502 14:10:00 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.502 14:10:00 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:23.502 14:10:00 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718021400.XXXXXX 00:01:23.502 14:10:00 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718021400.HSxi0o 00:01:23.502 14:10:00 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:23.502 14:10:00 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:23.502 14:10:00 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:23.502 14:10:00 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.502 14:10:00 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.502 14:10:00 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:23.502 14:10:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:23.502 14:10:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.502 14:10:00 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:23.502 14:10:00 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:23.502 14:10:00 -- pm/common@17 -- $ local monitor 00:01:23.502 14:10:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.502 14:10:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.502 14:10:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.502 14:10:00 -- pm/common@21 -- $ date +%s 00:01:23.502 14:10:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.502 14:10:00 -- pm/common@21 -- $ date +%s 00:01:23.502 14:10:00 -- pm/common@25 -- $ sleep 1 00:01:23.502 14:10:00 -- pm/common@21 -- $ date +%s 00:01:23.502 14:10:00 -- pm/common@21 -- $ date +%s 00:01:23.502 14:10:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718021400 00:01:23.502 14:10:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718021400 00:01:23.502 14:10:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718021400 00:01:23.502 14:10:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718021400 00:01:23.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718021400_collect-vmstat.pm.log 00:01:23.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718021400_collect-cpu-load.pm.log 00:01:23.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718021400_collect-cpu-temp.pm.log 00:01:23.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718021400_collect-bmc-pm.bmc.pm.log 00:01:24.445 14:10:01 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:24.445 14:10:01 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.445 14:10:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.445 14:10:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.445 14:10:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.445 Mon Jun 10 12:10:01 PM UTC 2024 00:01:24.445 14:10:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.445 v24.09-pre-59-g28a75b1f3 00:01:24.445 14:10:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.445 14:10:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.445 14:10:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.445 14:10:01 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:24.445 14:10:01 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:24.445 14:10:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.445 ************************************ 00:01:24.445 START TEST ubsan 00:01:24.445 ************************************ 00:01:24.445 14:10:02 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:24.445 using ubsan 00:01:24.445 00:01:24.445 real 0m0.001s 00:01:24.445 user 0m0.000s 00:01:24.445 sys 0m0.000s 00:01:24.445 14:10:02 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:24.445 14:10:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.445 ************************************ 00:01:24.445 END TEST ubsan 00:01:24.445 ************************************ 00:01:24.707 14:10:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.707 14:10:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.707 14:10:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.707 14:10:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:24.707 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:24.707 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:25.277 Using 'verbs' RDMA provider 00:01:40.826 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:53.062 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:53.062 Creating mk/config.mk...done. 00:01:53.062 Creating mk/cc.flags.mk...done. 00:01:53.062 Type 'make' to build. 00:01:53.062 14:10:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:53.062 14:10:29 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:53.062 14:10:29 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:53.062 14:10:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.062 ************************************ 00:01:53.062 START TEST make 00:01:53.062 ************************************ 00:01:53.062 14:10:29 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:53.062 make[1]: Nothing to be done for 'all'. 
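For reference, the configure and make step traced above can be reproduced outside the CI pipeline roughly as follows. This is a sketch only: the flags, workspace path, and -j value are copied from this log, not from autobuild.sh, and a local checkout would normally live at a different path.

  # rebuild SPDK with the same options this job used (sketch; flags and paths taken from the log above)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j144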
00:01:54.001 The Meson build system 00:01:54.001 Version: 1.3.1 00:01:54.001 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:54.001 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:54.001 Build type: native build 00:01:54.001 Project name: libvfio-user 00:01:54.001 Project version: 0.0.1 00:01:54.001 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.001 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.001 Host machine cpu family: x86_64 00:01:54.001 Host machine cpu: x86_64 00:01:54.001 Run-time dependency threads found: YES 00:01:54.001 Library dl found: YES 00:01:54.001 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.001 Run-time dependency json-c found: YES 0.17 00:01:54.001 Run-time dependency cmocka found: YES 1.1.7 00:01:54.001 Program pytest-3 found: NO 00:01:54.001 Program flake8 found: NO 00:01:54.001 Program misspell-fixer found: NO 00:01:54.001 Program restructuredtext-lint found: NO 00:01:54.001 Program valgrind found: YES (/usr/bin/valgrind) 00:01:54.001 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.001 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.001 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.001 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:54.001 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:54.001 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:54.001 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:54.001 Build targets in project: 8 00:01:54.001 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:54.001 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:54.001 00:01:54.001 libvfio-user 0.0.1 00:01:54.001 00:01:54.001 User defined options 00:01:54.001 buildtype : debug 00:01:54.001 default_library: shared 00:01:54.001 libdir : /usr/local/lib 00:01:54.001 00:01:54.001 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.569 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:54.569 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:54.569 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:54.569 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:54.569 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:54.569 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:54.569 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:54.569 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:54.569 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:54.569 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:54.569 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:54.569 [11/37] Compiling C object samples/null.p/null.c.o 00:01:54.569 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:54.569 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:54.569 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:54.569 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:54.569 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:54.569 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:54.569 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:54.569 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:54.569 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:54.569 [21/37] Compiling C object samples/server.p/server.c.o 00:01:54.569 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:54.569 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:54.569 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:54.569 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:54.569 [26/37] Compiling C object samples/client.p/client.c.o 00:01:54.569 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:54.569 [28/37] Linking target samples/client 00:01:54.830 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:54.830 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:54.830 [31/37] Linking target test/unit_tests 00:01:54.830 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:54.830 [33/37] Linking target samples/gpio-pci-idio-16 00:01:54.830 [34/37] Linking target samples/server 00:01:54.830 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:54.830 [36/37] Linking target samples/lspci 00:01:54.830 [37/37] Linking target samples/null 00:01:54.830 INFO: autodetecting backend as ninja 00:01:54.830 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
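The libvfio-user compile that just finished follows the standard Meson/Ninja flow; a minimal equivalent is sketched below. The exact meson setup command line is not captured in this log, so the setup options here are an assumption inferred from the "User defined options" summary (buildtype debug, default_library shared, libdir /usr/local/lib); the install command matches the DESTDIR invocation on the next log line.

  # sketch of the libvfio-user build/install flow shown in this log
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared --libdir=/usr/local/lib   # options inferred, not logged
  ninja -C "$BUILD"
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"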
00:01:55.091 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:55.351 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:55.352 ninja: no work to do. 00:02:00.640 The Meson build system 00:02:00.640 Version: 1.3.1 00:02:00.640 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:00.640 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:00.640 Build type: native build 00:02:00.640 Program cat found: YES (/usr/bin/cat) 00:02:00.640 Project name: DPDK 00:02:00.640 Project version: 24.03.0 00:02:00.640 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.640 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.640 Host machine cpu family: x86_64 00:02:00.640 Host machine cpu: x86_64 00:02:00.640 Message: ## Building in Developer Mode ## 00:02:00.640 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.640 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.640 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.640 Program python3 found: YES (/usr/bin/python3) 00:02:00.640 Program cat found: YES (/usr/bin/cat) 00:02:00.640 Compiler for C supports arguments -march=native: YES 00:02:00.640 Checking for size of "void *" : 8 00:02:00.640 Checking for size of "void *" : 8 (cached) 00:02:00.640 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:00.640 Library m found: YES 00:02:00.640 Library numa found: YES 00:02:00.640 Has header "numaif.h" : YES 00:02:00.640 Library fdt found: NO 00:02:00.640 Library execinfo found: NO 00:02:00.640 Has header "execinfo.h" : YES 00:02:00.640 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.640 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.640 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.640 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.640 Run-time dependency openssl found: YES 3.0.9 00:02:00.640 Run-time dependency libpcap found: YES 1.10.4 00:02:00.640 Has header "pcap.h" with dependency libpcap: YES 00:02:00.640 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.640 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.640 Compiler for C supports arguments -Wformat: YES 00:02:00.640 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.640 Compiler for C supports arguments -Wformat-security: NO 00:02:00.640 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.640 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.640 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.640 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.640 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.640 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.640 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.640 Compiler for C supports arguments -Wundef: YES 00:02:00.640 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.640 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.640 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:00.640 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.640 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.640 Program objdump found: YES (/usr/bin/objdump) 00:02:00.640 Compiler for C supports arguments -mavx512f: YES 00:02:00.640 Checking if "AVX512 checking" compiles: YES 00:02:00.640 Fetching value of define "__SSE4_2__" : 1 00:02:00.640 Fetching value of define "__AES__" : 1 00:02:00.640 Fetching value of define "__AVX__" : 1 00:02:00.640 Fetching value of define "__AVX2__" : 1 00:02:00.640 Fetching value of define "__AVX512BW__" : 1 00:02:00.640 Fetching value of define "__AVX512CD__" : 1 00:02:00.640 Fetching value of define "__AVX512DQ__" : 1 00:02:00.640 Fetching value of define "__AVX512F__" : 1 00:02:00.640 Fetching value of define "__AVX512VL__" : 1 00:02:00.640 Fetching value of define "__PCLMUL__" : 1 00:02:00.640 Fetching value of define "__RDRND__" : 1 00:02:00.640 Fetching value of define "__RDSEED__" : 1 00:02:00.640 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:00.640 Fetching value of define "__znver1__" : (undefined) 00:02:00.640 Fetching value of define "__znver2__" : (undefined) 00:02:00.640 Fetching value of define "__znver3__" : (undefined) 00:02:00.640 Fetching value of define "__znver4__" : (undefined) 00:02:00.640 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.640 Message: lib/log: Defining dependency "log" 00:02:00.640 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.640 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.640 Checking for function "getentropy" : NO 00:02:00.640 Message: lib/eal: Defining dependency "eal" 00:02:00.640 Message: lib/ring: Defining dependency "ring" 00:02:00.640 Message: lib/rcu: Defining dependency "rcu" 00:02:00.640 Message: lib/mempool: Defining dependency "mempool" 00:02:00.640 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.640 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.640 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.640 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.640 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.640 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.640 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:00.640 Compiler for C supports arguments -mpclmul: YES 00:02:00.640 Compiler for C supports arguments -maes: YES 00:02:00.640 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.640 Compiler for C supports arguments -mavx512bw: YES 00:02:00.640 Compiler for C supports arguments -mavx512dq: YES 00:02:00.640 Compiler for C supports arguments -mavx512vl: YES 00:02:00.640 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.640 Compiler for C supports arguments -mavx2: YES 00:02:00.640 Compiler for C supports arguments -mavx: YES 00:02:00.640 Message: lib/net: Defining dependency "net" 00:02:00.640 Message: lib/meter: Defining dependency "meter" 00:02:00.640 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.640 Message: lib/pci: Defining dependency "pci" 00:02:00.640 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.640 Message: lib/hash: Defining dependency "hash" 00:02:00.640 Message: lib/timer: Defining dependency "timer" 00:02:00.640 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.640 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.640 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.640 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:00.640 Message: lib/power: Defining dependency "power" 00:02:00.640 Message: lib/reorder: Defining dependency "reorder" 00:02:00.640 Message: lib/security: Defining dependency "security" 00:02:00.640 Has header "linux/userfaultfd.h" : YES 00:02:00.640 Has header "linux/vduse.h" : YES 00:02:00.640 Message: lib/vhost: Defining dependency "vhost" 00:02:00.640 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.640 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.640 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.640 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.640 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.640 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.640 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.640 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.640 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.640 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.640 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.640 Configuring doxy-api-html.conf using configuration 00:02:00.640 Configuring doxy-api-man.conf using configuration 00:02:00.640 Program mandb found: YES (/usr/bin/mandb) 00:02:00.640 Program sphinx-build found: NO 00:02:00.640 Configuring rte_build_config.h using configuration 00:02:00.640 Message: 00:02:00.640 ================= 00:02:00.640 Applications Enabled 00:02:00.640 ================= 00:02:00.640 00:02:00.640 apps: 00:02:00.640 00:02:00.640 00:02:00.640 Message: 00:02:00.640 ================= 00:02:00.640 Libraries Enabled 00:02:00.640 ================= 00:02:00.640 00:02:00.640 libs: 00:02:00.640 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.640 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.640 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.640 00:02:00.640 Message: 00:02:00.640 =============== 00:02:00.640 Drivers Enabled 00:02:00.640 =============== 00:02:00.640 00:02:00.640 common: 00:02:00.640 00:02:00.640 bus: 00:02:00.640 pci, vdev, 00:02:00.640 mempool: 00:02:00.640 ring, 00:02:00.640 dma: 00:02:00.640 00:02:00.640 net: 00:02:00.640 00:02:00.640 crypto: 00:02:00.640 00:02:00.640 compress: 00:02:00.640 00:02:00.640 vdpa: 00:02:00.640 00:02:00.640 00:02:00.640 Message: 00:02:00.640 ================= 00:02:00.640 Content Skipped 00:02:00.640 ================= 00:02:00.640 00:02:00.640 apps: 00:02:00.640 dumpcap: explicitly disabled via build config 00:02:00.640 graph: explicitly disabled via build config 00:02:00.640 pdump: explicitly disabled via build config 00:02:00.640 proc-info: explicitly disabled via build config 00:02:00.641 test-acl: explicitly disabled via build config 00:02:00.641 test-bbdev: explicitly disabled via build config 00:02:00.641 test-cmdline: explicitly disabled via build config 00:02:00.641 test-compress-perf: explicitly disabled via build config 00:02:00.641 test-crypto-perf: explicitly disabled via build config 00:02:00.641 test-dma-perf: explicitly disabled via build config 00:02:00.641 test-eventdev: explicitly disabled via build config 00:02:00.641 test-fib: explicitly disabled via build config 00:02:00.641 test-flow-perf: explicitly disabled via build config 00:02:00.641 test-gpudev: explicitly disabled via build config 00:02:00.641 
test-mldev: explicitly disabled via build config 00:02:00.641 test-pipeline: explicitly disabled via build config 00:02:00.641 test-pmd: explicitly disabled via build config 00:02:00.641 test-regex: explicitly disabled via build config 00:02:00.641 test-sad: explicitly disabled via build config 00:02:00.641 test-security-perf: explicitly disabled via build config 00:02:00.641 00:02:00.641 libs: 00:02:00.641 argparse: explicitly disabled via build config 00:02:00.641 metrics: explicitly disabled via build config 00:02:00.641 acl: explicitly disabled via build config 00:02:00.641 bbdev: explicitly disabled via build config 00:02:00.641 bitratestats: explicitly disabled via build config 00:02:00.641 bpf: explicitly disabled via build config 00:02:00.641 cfgfile: explicitly disabled via build config 00:02:00.641 distributor: explicitly disabled via build config 00:02:00.641 efd: explicitly disabled via build config 00:02:00.641 eventdev: explicitly disabled via build config 00:02:00.641 dispatcher: explicitly disabled via build config 00:02:00.641 gpudev: explicitly disabled via build config 00:02:00.641 gro: explicitly disabled via build config 00:02:00.641 gso: explicitly disabled via build config 00:02:00.641 ip_frag: explicitly disabled via build config 00:02:00.641 jobstats: explicitly disabled via build config 00:02:00.641 latencystats: explicitly disabled via build config 00:02:00.641 lpm: explicitly disabled via build config 00:02:00.641 member: explicitly disabled via build config 00:02:00.641 pcapng: explicitly disabled via build config 00:02:00.641 rawdev: explicitly disabled via build config 00:02:00.641 regexdev: explicitly disabled via build config 00:02:00.641 mldev: explicitly disabled via build config 00:02:00.641 rib: explicitly disabled via build config 00:02:00.641 sched: explicitly disabled via build config 00:02:00.641 stack: explicitly disabled via build config 00:02:00.641 ipsec: explicitly disabled via build config 00:02:00.641 pdcp: explicitly disabled via build config 00:02:00.641 fib: explicitly disabled via build config 00:02:00.641 port: explicitly disabled via build config 00:02:00.641 pdump: explicitly disabled via build config 00:02:00.641 table: explicitly disabled via build config 00:02:00.641 pipeline: explicitly disabled via build config 00:02:00.641 graph: explicitly disabled via build config 00:02:00.641 node: explicitly disabled via build config 00:02:00.641 00:02:00.641 drivers: 00:02:00.641 common/cpt: not in enabled drivers build config 00:02:00.641 common/dpaax: not in enabled drivers build config 00:02:00.641 common/iavf: not in enabled drivers build config 00:02:00.641 common/idpf: not in enabled drivers build config 00:02:00.641 common/ionic: not in enabled drivers build config 00:02:00.641 common/mvep: not in enabled drivers build config 00:02:00.641 common/octeontx: not in enabled drivers build config 00:02:00.641 bus/auxiliary: not in enabled drivers build config 00:02:00.641 bus/cdx: not in enabled drivers build config 00:02:00.641 bus/dpaa: not in enabled drivers build config 00:02:00.641 bus/fslmc: not in enabled drivers build config 00:02:00.641 bus/ifpga: not in enabled drivers build config 00:02:00.641 bus/platform: not in enabled drivers build config 00:02:00.641 bus/uacce: not in enabled drivers build config 00:02:00.641 bus/vmbus: not in enabled drivers build config 00:02:00.641 common/cnxk: not in enabled drivers build config 00:02:00.641 common/mlx5: not in enabled drivers build config 00:02:00.641 common/nfp: not in enabled drivers 
build config 00:02:00.641 common/nitrox: not in enabled drivers build config 00:02:00.641 common/qat: not in enabled drivers build config 00:02:00.641 common/sfc_efx: not in enabled drivers build config 00:02:00.641 mempool/bucket: not in enabled drivers build config 00:02:00.641 mempool/cnxk: not in enabled drivers build config 00:02:00.641 mempool/dpaa: not in enabled drivers build config 00:02:00.641 mempool/dpaa2: not in enabled drivers build config 00:02:00.641 mempool/octeontx: not in enabled drivers build config 00:02:00.641 mempool/stack: not in enabled drivers build config 00:02:00.641 dma/cnxk: not in enabled drivers build config 00:02:00.641 dma/dpaa: not in enabled drivers build config 00:02:00.641 dma/dpaa2: not in enabled drivers build config 00:02:00.641 dma/hisilicon: not in enabled drivers build config 00:02:00.641 dma/idxd: not in enabled drivers build config 00:02:00.641 dma/ioat: not in enabled drivers build config 00:02:00.641 dma/skeleton: not in enabled drivers build config 00:02:00.641 net/af_packet: not in enabled drivers build config 00:02:00.641 net/af_xdp: not in enabled drivers build config 00:02:00.641 net/ark: not in enabled drivers build config 00:02:00.641 net/atlantic: not in enabled drivers build config 00:02:00.641 net/avp: not in enabled drivers build config 00:02:00.641 net/axgbe: not in enabled drivers build config 00:02:00.641 net/bnx2x: not in enabled drivers build config 00:02:00.641 net/bnxt: not in enabled drivers build config 00:02:00.641 net/bonding: not in enabled drivers build config 00:02:00.641 net/cnxk: not in enabled drivers build config 00:02:00.641 net/cpfl: not in enabled drivers build config 00:02:00.641 net/cxgbe: not in enabled drivers build config 00:02:00.641 net/dpaa: not in enabled drivers build config 00:02:00.641 net/dpaa2: not in enabled drivers build config 00:02:00.641 net/e1000: not in enabled drivers build config 00:02:00.641 net/ena: not in enabled drivers build config 00:02:00.641 net/enetc: not in enabled drivers build config 00:02:00.641 net/enetfec: not in enabled drivers build config 00:02:00.641 net/enic: not in enabled drivers build config 00:02:00.641 net/failsafe: not in enabled drivers build config 00:02:00.641 net/fm10k: not in enabled drivers build config 00:02:00.641 net/gve: not in enabled drivers build config 00:02:00.641 net/hinic: not in enabled drivers build config 00:02:00.641 net/hns3: not in enabled drivers build config 00:02:00.641 net/i40e: not in enabled drivers build config 00:02:00.641 net/iavf: not in enabled drivers build config 00:02:00.641 net/ice: not in enabled drivers build config 00:02:00.641 net/idpf: not in enabled drivers build config 00:02:00.641 net/igc: not in enabled drivers build config 00:02:00.641 net/ionic: not in enabled drivers build config 00:02:00.641 net/ipn3ke: not in enabled drivers build config 00:02:00.641 net/ixgbe: not in enabled drivers build config 00:02:00.641 net/mana: not in enabled drivers build config 00:02:00.641 net/memif: not in enabled drivers build config 00:02:00.641 net/mlx4: not in enabled drivers build config 00:02:00.641 net/mlx5: not in enabled drivers build config 00:02:00.641 net/mvneta: not in enabled drivers build config 00:02:00.641 net/mvpp2: not in enabled drivers build config 00:02:00.641 net/netvsc: not in enabled drivers build config 00:02:00.641 net/nfb: not in enabled drivers build config 00:02:00.641 net/nfp: not in enabled drivers build config 00:02:00.641 net/ngbe: not in enabled drivers build config 00:02:00.641 net/null: not in 
enabled drivers build config 00:02:00.641 net/octeontx: not in enabled drivers build config 00:02:00.641 net/octeon_ep: not in enabled drivers build config 00:02:00.641 net/pcap: not in enabled drivers build config 00:02:00.641 net/pfe: not in enabled drivers build config 00:02:00.641 net/qede: not in enabled drivers build config 00:02:00.641 net/ring: not in enabled drivers build config 00:02:00.641 net/sfc: not in enabled drivers build config 00:02:00.641 net/softnic: not in enabled drivers build config 00:02:00.641 net/tap: not in enabled drivers build config 00:02:00.641 net/thunderx: not in enabled drivers build config 00:02:00.641 net/txgbe: not in enabled drivers build config 00:02:00.641 net/vdev_netvsc: not in enabled drivers build config 00:02:00.641 net/vhost: not in enabled drivers build config 00:02:00.641 net/virtio: not in enabled drivers build config 00:02:00.641 net/vmxnet3: not in enabled drivers build config 00:02:00.641 raw/*: missing internal dependency, "rawdev" 00:02:00.641 crypto/armv8: not in enabled drivers build config 00:02:00.641 crypto/bcmfs: not in enabled drivers build config 00:02:00.641 crypto/caam_jr: not in enabled drivers build config 00:02:00.641 crypto/ccp: not in enabled drivers build config 00:02:00.641 crypto/cnxk: not in enabled drivers build config 00:02:00.641 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.641 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.641 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.641 crypto/mlx5: not in enabled drivers build config 00:02:00.641 crypto/mvsam: not in enabled drivers build config 00:02:00.641 crypto/nitrox: not in enabled drivers build config 00:02:00.641 crypto/null: not in enabled drivers build config 00:02:00.641 crypto/octeontx: not in enabled drivers build config 00:02:00.641 crypto/openssl: not in enabled drivers build config 00:02:00.641 crypto/scheduler: not in enabled drivers build config 00:02:00.641 crypto/uadk: not in enabled drivers build config 00:02:00.641 crypto/virtio: not in enabled drivers build config 00:02:00.641 compress/isal: not in enabled drivers build config 00:02:00.641 compress/mlx5: not in enabled drivers build config 00:02:00.641 compress/nitrox: not in enabled drivers build config 00:02:00.641 compress/octeontx: not in enabled drivers build config 00:02:00.641 compress/zlib: not in enabled drivers build config 00:02:00.641 regex/*: missing internal dependency, "regexdev" 00:02:00.641 ml/*: missing internal dependency, "mldev" 00:02:00.641 vdpa/ifc: not in enabled drivers build config 00:02:00.641 vdpa/mlx5: not in enabled drivers build config 00:02:00.641 vdpa/nfp: not in enabled drivers build config 00:02:00.641 vdpa/sfc: not in enabled drivers build config 00:02:00.641 event/*: missing internal dependency, "eventdev" 00:02:00.641 baseband/*: missing internal dependency, "bbdev" 00:02:00.641 gpu/*: missing internal dependency, "gpudev" 00:02:00.641 00:02:00.641 00:02:00.642 Build targets in project: 84 00:02:00.642 00:02:00.642 DPDK 24.03.0 00:02:00.642 00:02:00.642 User defined options 00:02:00.642 buildtype : debug 00:02:00.642 default_library : shared 00:02:00.642 libdir : lib 00:02:00.642 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:00.642 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.642 c_link_args : 00:02:00.642 cpu_instruction_set: native 00:02:00.642 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:00.642 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:00.642 enable_docs : false 00:02:00.642 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.642 enable_kmods : false 00:02:00.642 tests : false 00:02:00.642 00:02:00.642 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.220 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:01.220 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:01.220 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.220 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.220 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.220 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.220 [6/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:01.220 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.481 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.481 [9/267] Linking static target lib/librte_kvargs.a 00:02:01.481 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.481 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.481 [12/267] Linking static target lib/librte_log.a 00:02:01.481 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.481 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.481 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.481 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.481 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.481 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.481 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.481 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.481 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.481 [22/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.481 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:01.481 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.481 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.481 [26/267] Linking static target lib/librte_pci.a 00:02:01.481 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.481 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.481 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.481 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.481 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.481 [32/267] 
Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.481 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.481 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.739 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.739 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:01.739 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:01.739 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:01.739 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.739 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.739 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.739 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:01.739 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.739 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.739 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.739 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:01.739 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:01.739 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.739 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.999 [50/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.999 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.999 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.999 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.999 [54/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.999 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.999 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.999 [57/267] Linking static target lib/librte_meter.a 00:02:01.999 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.999 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.999 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.999 [61/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.999 [62/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:01.999 [63/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.999 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:01.999 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.999 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.999 [67/267] Linking static target lib/librte_telemetry.a 00:02:01.999 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.999 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.999 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.999 [71/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.999 [72/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.999 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.999 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.999 [75/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.999 [76/267] Linking static target lib/librte_ring.a 00:02:01.999 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:01.999 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.999 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.999 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.999 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.999 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:01.999 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:01.999 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.999 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:01.999 [86/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:01.999 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:01.999 [88/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.999 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.999 [90/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.999 [91/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.999 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.999 [93/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.999 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.999 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.999 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.999 [97/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.999 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.999 [99/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.999 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.999 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.999 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.999 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.999 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.999 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.999 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.999 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.999 [108/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.999 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.999 [110/267] Linking static target lib/librte_cmdline.a 00:02:01.999 [111/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:01.999 [112/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.999 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.999 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.999 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.999 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:01.999 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.999 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.999 [119/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.999 [120/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.999 [121/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.999 [122/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.999 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.999 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:01.999 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.999 [126/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.999 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.999 [128/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:01.999 [129/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.999 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.999 [131/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.999 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.999 [133/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.999 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.999 [135/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.999 [136/267] Linking static target lib/librte_timer.a 00:02:01.999 [137/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:01.999 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.999 [139/267] Linking target lib/librte_log.so.24.1 00:02:01.999 [140/267] Linking static target lib/librte_compressdev.a 00:02:01.999 [141/267] Linking static target lib/librte_mempool.a 00:02:01.999 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.999 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.999 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.999 [145/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.999 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.999 [147/267] Linking static target lib/librte_dmadev.a 00:02:01.999 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.999 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.999 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.999 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:01.999 [152/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.999 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.999 [154/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.999 [155/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.999 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.260 [157/267] Linking static target lib/librte_reorder.a 00:02:02.260 [158/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.260 [159/267] Linking static target lib/librte_net.a 00:02:02.260 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.260 [161/267] Linking static target lib/librte_eal.a 00:02:02.260 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.260 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.260 [164/267] Linking static target lib/librte_mbuf.a 00:02:02.260 [165/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.260 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:02.260 [167/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.260 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.260 [169/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.260 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:02.260 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.260 [172/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.260 [173/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.260 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.260 [175/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:02.260 [176/267] Linking static target lib/librte_rcu.a 00:02:02.260 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:02.260 [178/267] Linking static target lib/librte_power.a 00:02:02.260 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.260 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.260 [181/267] Linking static target lib/librte_security.a 00:02:02.260 [182/267] Linking target lib/librte_kvargs.so.24.1 00:02:02.260 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.260 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:02.260 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:02.260 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.260 [187/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.260 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.260 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.260 [190/267] Linking static target drivers/librte_bus_vdev.a 00:02:02.260 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.260 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:02.260 [193/267] Linking static target lib/librte_hash.a 00:02:02.260 [194/267] Generating drivers/rte_bus_pci.pmd.c with 
a custom command 00:02:02.260 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.260 [196/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:02.521 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.521 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.521 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.521 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.521 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.521 [202/267] Linking static target drivers/librte_mempool_ring.a 00:02:02.521 [203/267] Linking static target drivers/librte_bus_pci.a 00:02:02.521 [204/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.521 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.521 [206/267] Linking static target lib/librte_cryptodev.a 00:02:02.521 [207/267] Linking target lib/librte_telemetry.so.24.1 00:02:02.521 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.521 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.521 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.521 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.782 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.782 [213/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.782 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.782 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.782 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.782 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.043 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.043 [219/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.043 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:03.043 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.043 [222/267] Linking static target lib/librte_ethdev.a 00:02:03.305 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.305 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.305 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.305 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.876 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.876 [228/267] Linking static target lib/librte_vhost.a 00:02:04.818 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.203 [230/267] 
Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.793 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.736 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.736 [233/267] Linking target lib/librte_eal.so.24.1 00:02:13.997 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.997 [235/267] Linking target lib/librte_ring.so.24.1 00:02:13.997 [236/267] Linking target lib/librte_timer.so.24.1 00:02:13.997 [237/267] Linking target lib/librte_meter.so.24.1 00:02:13.997 [238/267] Linking target lib/librte_pci.so.24.1 00:02:13.997 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.997 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:14.264 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.264 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.264 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.264 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.264 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.264 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:14.264 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:14.264 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:14.264 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.264 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.622 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.622 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:14.622 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:14.622 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:14.622 [255/267] Linking target lib/librte_net.so.24.1 00:02:14.622 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:14.622 [257/267] Linking target lib/librte_reorder.so.24.1 00:02:14.882 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.882 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.882 [260/267] Linking target lib/librte_security.so.24.1 00:02:14.882 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:14.882 [262/267] Linking target lib/librte_hash.so.24.1 00:02:14.882 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:14.882 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.143 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.143 [266/267] Linking target lib/librte_power.so.24.1 00:02:15.143 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:15.143 INFO: autodetecting backend as ninja 00:02:15.143 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:16.529 CC lib/ut_mock/mock.o 00:02:16.529 CC lib/ut/ut.o 00:02:16.529 CC lib/log/log.o 00:02:16.529 CC lib/log/log_flags.o 00:02:16.529 CC lib/log/log_deprecated.o 00:02:16.529 LIB libspdk_ut.a 00:02:16.529 LIB libspdk_ut_mock.a 00:02:16.529 LIB libspdk_log.a 00:02:16.529 SO libspdk_ut.so.2.0 
00:02:16.529 SO libspdk_ut_mock.so.6.0 00:02:16.529 SO libspdk_log.so.7.0 00:02:16.529 SYMLINK libspdk_ut.so 00:02:16.529 SYMLINK libspdk_ut_mock.so 00:02:16.529 SYMLINK libspdk_log.so 00:02:17.100 CC lib/ioat/ioat.o 00:02:17.100 CC lib/util/base64.o 00:02:17.100 CC lib/util/bit_array.o 00:02:17.100 CC lib/util/crc16.o 00:02:17.100 CXX lib/trace_parser/trace.o 00:02:17.100 CC lib/util/cpuset.o 00:02:17.100 CC lib/util/crc32.o 00:02:17.100 CC lib/dma/dma.o 00:02:17.100 CC lib/util/crc64.o 00:02:17.100 CC lib/util/crc32c.o 00:02:17.100 CC lib/util/crc32_ieee.o 00:02:17.100 CC lib/util/dif.o 00:02:17.100 CC lib/util/fd.o 00:02:17.100 CC lib/util/file.o 00:02:17.100 CC lib/util/hexlify.o 00:02:17.100 CC lib/util/iov.o 00:02:17.100 CC lib/util/math.o 00:02:17.100 CC lib/util/pipe.o 00:02:17.100 CC lib/util/strerror_tls.o 00:02:17.100 CC lib/util/string.o 00:02:17.100 CC lib/util/uuid.o 00:02:17.100 CC lib/util/fd_group.o 00:02:17.100 CC lib/util/xor.o 00:02:17.100 CC lib/util/zipf.o 00:02:17.100 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.100 CC lib/vfio_user/host/vfio_user.o 00:02:17.100 LIB libspdk_dma.a 00:02:17.361 SO libspdk_dma.so.4.0 00:02:17.361 LIB libspdk_ioat.a 00:02:17.361 SO libspdk_ioat.so.7.0 00:02:17.361 SYMLINK libspdk_dma.so 00:02:17.361 SYMLINK libspdk_ioat.so 00:02:17.361 LIB libspdk_vfio_user.a 00:02:17.361 SO libspdk_vfio_user.so.5.0 00:02:17.361 LIB libspdk_util.a 00:02:17.361 SYMLINK libspdk_vfio_user.so 00:02:17.622 SO libspdk_util.so.9.0 00:02:17.622 SYMLINK libspdk_util.so 00:02:17.883 LIB libspdk_trace_parser.a 00:02:17.883 SO libspdk_trace_parser.so.5.0 00:02:17.883 SYMLINK libspdk_trace_parser.so 00:02:18.143 CC lib/env_dpdk/env.o 00:02:18.143 CC lib/env_dpdk/memory.o 00:02:18.143 CC lib/env_dpdk/init.o 00:02:18.143 CC lib/env_dpdk/pci.o 00:02:18.143 CC lib/env_dpdk/threads.o 00:02:18.143 CC lib/env_dpdk/pci_virtio.o 00:02:18.143 CC lib/env_dpdk/pci_ioat.o 00:02:18.143 CC lib/env_dpdk/pci_vmd.o 00:02:18.143 CC lib/env_dpdk/pci_idxd.o 00:02:18.143 CC lib/conf/conf.o 00:02:18.143 CC lib/env_dpdk/pci_event.o 00:02:18.143 CC lib/env_dpdk/sigbus_handler.o 00:02:18.143 CC lib/env_dpdk/pci_dpdk.o 00:02:18.143 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.143 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.143 CC lib/rdma/common.o 00:02:18.143 CC lib/rdma/rdma_verbs.o 00:02:18.143 CC lib/idxd/idxd.o 00:02:18.143 CC lib/idxd/idxd_user.o 00:02:18.143 CC lib/vmd/vmd.o 00:02:18.143 CC lib/json/json_parse.o 00:02:18.143 CC lib/idxd/idxd_kernel.o 00:02:18.143 CC lib/json/json_util.o 00:02:18.143 CC lib/vmd/led.o 00:02:18.143 CC lib/json/json_write.o 00:02:18.403 LIB libspdk_conf.a 00:02:18.403 SO libspdk_conf.so.6.0 00:02:18.403 LIB libspdk_rdma.a 00:02:18.403 LIB libspdk_json.a 00:02:18.403 SO libspdk_rdma.so.6.0 00:02:18.403 SO libspdk_json.so.6.0 00:02:18.403 SYMLINK libspdk_conf.so 00:02:18.403 SYMLINK libspdk_rdma.so 00:02:18.403 SYMLINK libspdk_json.so 00:02:18.664 LIB libspdk_idxd.a 00:02:18.664 SO libspdk_idxd.so.12.0 00:02:18.664 LIB libspdk_vmd.a 00:02:18.664 SO libspdk_vmd.so.6.0 00:02:18.664 SYMLINK libspdk_idxd.so 00:02:18.664 SYMLINK libspdk_vmd.so 00:02:18.924 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.924 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.924 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.924 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.184 LIB libspdk_jsonrpc.a 00:02:19.184 SO libspdk_jsonrpc.so.6.0 00:02:19.184 SYMLINK libspdk_jsonrpc.so 00:02:19.184 LIB libspdk_env_dpdk.a 00:02:19.444 SO libspdk_env_dpdk.so.14.1 00:02:19.444 SYMLINK libspdk_env_dpdk.so 00:02:19.705 
CC lib/rpc/rpc.o 00:02:19.705 LIB libspdk_rpc.a 00:02:19.705 SO libspdk_rpc.so.6.0 00:02:19.966 SYMLINK libspdk_rpc.so 00:02:20.226 CC lib/notify/notify.o 00:02:20.226 CC lib/notify/notify_rpc.o 00:02:20.226 CC lib/keyring/keyring.o 00:02:20.226 CC lib/keyring/keyring_rpc.o 00:02:20.226 CC lib/trace/trace.o 00:02:20.226 CC lib/trace/trace_flags.o 00:02:20.226 CC lib/trace/trace_rpc.o 00:02:20.486 LIB libspdk_notify.a 00:02:20.486 SO libspdk_notify.so.6.0 00:02:20.486 LIB libspdk_keyring.a 00:02:20.486 LIB libspdk_trace.a 00:02:20.486 SO libspdk_keyring.so.1.0 00:02:20.486 SYMLINK libspdk_notify.so 00:02:20.486 SO libspdk_trace.so.10.0 00:02:20.745 SYMLINK libspdk_keyring.so 00:02:20.745 SYMLINK libspdk_trace.so 00:02:21.005 CC lib/sock/sock.o 00:02:21.005 CC lib/sock/sock_rpc.o 00:02:21.005 CC lib/thread/thread.o 00:02:21.005 CC lib/thread/iobuf.o 00:02:21.265 LIB libspdk_sock.a 00:02:21.265 SO libspdk_sock.so.9.0 00:02:21.527 SYMLINK libspdk_sock.so 00:02:21.788 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.788 CC lib/nvme/nvme_ctrlr.o 00:02:21.789 CC lib/nvme/nvme_ns_cmd.o 00:02:21.789 CC lib/nvme/nvme_fabric.o 00:02:21.789 CC lib/nvme/nvme_ns.o 00:02:21.789 CC lib/nvme/nvme_pcie_common.o 00:02:21.789 CC lib/nvme/nvme_pcie.o 00:02:21.789 CC lib/nvme/nvme_qpair.o 00:02:21.789 CC lib/nvme/nvme.o 00:02:21.789 CC lib/nvme/nvme_quirks.o 00:02:21.789 CC lib/nvme/nvme_transport.o 00:02:21.789 CC lib/nvme/nvme_discovery.o 00:02:21.789 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.789 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.789 CC lib/nvme/nvme_tcp.o 00:02:21.789 CC lib/nvme/nvme_opal.o 00:02:21.789 CC lib/nvme/nvme_io_msg.o 00:02:21.789 CC lib/nvme/nvme_poll_group.o 00:02:21.789 CC lib/nvme/nvme_stubs.o 00:02:21.789 CC lib/nvme/nvme_zns.o 00:02:21.789 CC lib/nvme/nvme_auth.o 00:02:21.789 CC lib/nvme/nvme_cuse.o 00:02:21.789 CC lib/nvme/nvme_vfio_user.o 00:02:21.789 CC lib/nvme/nvme_rdma.o 00:02:22.359 LIB libspdk_thread.a 00:02:22.359 SO libspdk_thread.so.10.0 00:02:22.359 SYMLINK libspdk_thread.so 00:02:22.621 CC lib/init/json_config.o 00:02:22.621 CC lib/init/subsystem.o 00:02:22.621 CC lib/init/subsystem_rpc.o 00:02:22.621 CC lib/init/rpc.o 00:02:22.621 CC lib/virtio/virtio.o 00:02:22.621 CC lib/vfu_tgt/tgt_endpoint.o 00:02:22.621 CC lib/virtio/virtio_pci.o 00:02:22.621 CC lib/virtio/virtio_vhost_user.o 00:02:22.621 CC lib/vfu_tgt/tgt_rpc.o 00:02:22.621 CC lib/virtio/virtio_vfio_user.o 00:02:22.621 CC lib/accel/accel.o 00:02:22.621 CC lib/accel/accel_rpc.o 00:02:22.621 CC lib/accel/accel_sw.o 00:02:22.621 CC lib/blob/blobstore.o 00:02:22.621 CC lib/blob/request.o 00:02:22.621 CC lib/blob/zeroes.o 00:02:22.621 CC lib/blob/blob_bs_dev.o 00:02:22.881 LIB libspdk_init.a 00:02:22.881 SO libspdk_init.so.5.0 00:02:22.881 LIB libspdk_vfu_tgt.a 00:02:23.142 LIB libspdk_virtio.a 00:02:23.142 SYMLINK libspdk_init.so 00:02:23.142 SO libspdk_vfu_tgt.so.3.0 00:02:23.142 SO libspdk_virtio.so.7.0 00:02:23.142 SYMLINK libspdk_vfu_tgt.so 00:02:23.142 SYMLINK libspdk_virtio.so 00:02:23.402 CC lib/event/app.o 00:02:23.402 CC lib/event/reactor.o 00:02:23.402 CC lib/event/log_rpc.o 00:02:23.402 CC lib/event/app_rpc.o 00:02:23.402 CC lib/event/scheduler_static.o 00:02:23.661 LIB libspdk_accel.a 00:02:23.661 LIB libspdk_nvme.a 00:02:23.661 SO libspdk_accel.so.15.0 00:02:23.661 SYMLINK libspdk_accel.so 00:02:23.661 SO libspdk_nvme.so.13.0 00:02:23.661 LIB libspdk_event.a 00:02:23.661 SO libspdk_event.so.13.1 00:02:23.921 SYMLINK libspdk_event.so 00:02:23.921 SYMLINK libspdk_nvme.so 00:02:23.921 CC lib/bdev/bdev.o 00:02:23.921 
CC lib/bdev/bdev_rpc.o 00:02:23.921 CC lib/bdev/part.o 00:02:23.921 CC lib/bdev/bdev_zone.o 00:02:23.921 CC lib/bdev/scsi_nvme.o 00:02:25.304 LIB libspdk_blob.a 00:02:25.304 SO libspdk_blob.so.11.0 00:02:25.304 SYMLINK libspdk_blob.so 00:02:25.874 CC lib/blobfs/blobfs.o 00:02:25.874 CC lib/blobfs/tree.o 00:02:25.874 CC lib/lvol/lvol.o 00:02:26.135 LIB libspdk_bdev.a 00:02:26.395 SO libspdk_bdev.so.15.0 00:02:26.395 SYMLINK libspdk_bdev.so 00:02:26.395 LIB libspdk_blobfs.a 00:02:26.395 SO libspdk_blobfs.so.10.0 00:02:26.655 LIB libspdk_lvol.a 00:02:26.655 SYMLINK libspdk_blobfs.so 00:02:26.655 SO libspdk_lvol.so.10.0 00:02:26.655 SYMLINK libspdk_lvol.so 00:02:26.655 CC lib/nvmf/ctrlr.o 00:02:26.655 CC lib/nvmf/ctrlr_discovery.o 00:02:26.655 CC lib/nbd/nbd.o 00:02:26.655 CC lib/nvmf/ctrlr_bdev.o 00:02:26.655 CC lib/nvmf/subsystem.o 00:02:26.655 CC lib/nbd/nbd_rpc.o 00:02:26.655 CC lib/nvmf/nvmf.o 00:02:26.655 CC lib/nvmf/nvmf_rpc.o 00:02:26.655 CC lib/nvmf/transport.o 00:02:26.655 CC lib/nvmf/tcp.o 00:02:26.655 CC lib/nvmf/stubs.o 00:02:26.655 CC lib/ublk/ublk.o 00:02:26.655 CC lib/scsi/dev.o 00:02:26.655 CC lib/scsi/lun.o 00:02:26.655 CC lib/nvmf/mdns_server.o 00:02:26.655 CC lib/ublk/ublk_rpc.o 00:02:26.655 CC lib/scsi/port.o 00:02:26.655 CC lib/nvmf/vfio_user.o 00:02:26.655 CC lib/scsi/scsi.o 00:02:26.655 CC lib/nvmf/rdma.o 00:02:26.655 CC lib/ftl/ftl_core.o 00:02:26.655 CC lib/scsi/scsi_pr.o 00:02:26.655 CC lib/scsi/scsi_bdev.o 00:02:26.655 CC lib/ftl/ftl_init.o 00:02:26.655 CC lib/nvmf/auth.o 00:02:26.655 CC lib/scsi/scsi_rpc.o 00:02:26.655 CC lib/ftl/ftl_layout.o 00:02:26.655 CC lib/scsi/task.o 00:02:26.655 CC lib/ftl/ftl_debug.o 00:02:26.655 CC lib/ftl/ftl_io.o 00:02:26.655 CC lib/ftl/ftl_sb.o 00:02:26.655 CC lib/ftl/ftl_l2p.o 00:02:26.655 CC lib/ftl/ftl_l2p_flat.o 00:02:26.655 CC lib/ftl/ftl_nv_cache.o 00:02:26.655 CC lib/ftl/ftl_band.o 00:02:26.655 CC lib/ftl/ftl_band_ops.o 00:02:26.655 CC lib/ftl/ftl_writer.o 00:02:26.655 CC lib/ftl/ftl_rq.o 00:02:26.655 CC lib/ftl/ftl_reloc.o 00:02:26.655 CC lib/ftl/ftl_l2p_cache.o 00:02:26.655 CC lib/ftl/ftl_p2l.o 00:02:26.655 CC lib/ftl/mngt/ftl_mngt.o 00:02:26.655 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:26.655 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:26.655 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:26.655 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:26.912 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:26.912 CC lib/ftl/utils/ftl_md.o 00:02:26.913 CC lib/ftl/utils/ftl_mempool.o 00:02:26.913 CC lib/ftl/utils/ftl_conf.o 00:02:26.913 CC lib/ftl/utils/ftl_bitmap.o 00:02:26.913 CC lib/ftl/utils/ftl_property.o 00:02:26.913 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:26.913 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.913 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.913 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.913 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.913 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.913 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.913 CC lib/ftl/ftl_trace.o 00:02:26.913 CC 
lib/ftl/base/ftl_base_dev.o 00:02:27.481 LIB libspdk_nbd.a 00:02:27.481 LIB libspdk_scsi.a 00:02:27.481 SO libspdk_nbd.so.7.0 00:02:27.481 SO libspdk_scsi.so.9.0 00:02:27.481 SYMLINK libspdk_nbd.so 00:02:27.481 LIB libspdk_ublk.a 00:02:27.481 SYMLINK libspdk_scsi.so 00:02:27.481 SO libspdk_ublk.so.3.0 00:02:27.481 SYMLINK libspdk_ublk.so 00:02:27.741 CC lib/vhost/vhost.o 00:02:27.741 CC lib/iscsi/conn.o 00:02:27.741 LIB libspdk_ftl.a 00:02:27.741 CC lib/vhost/vhost_rpc.o 00:02:27.741 CC lib/iscsi/init_grp.o 00:02:27.741 CC lib/vhost/vhost_scsi.o 00:02:27.741 CC lib/iscsi/iscsi.o 00:02:27.741 CC lib/vhost/vhost_blk.o 00:02:27.741 CC lib/iscsi/md5.o 00:02:27.741 CC lib/vhost/rte_vhost_user.o 00:02:27.741 CC lib/iscsi/param.o 00:02:27.741 CC lib/iscsi/portal_grp.o 00:02:27.742 CC lib/iscsi/tgt_node.o 00:02:27.742 CC lib/iscsi/iscsi_subsystem.o 00:02:27.742 CC lib/iscsi/iscsi_rpc.o 00:02:27.742 CC lib/iscsi/task.o 00:02:28.001 SO libspdk_ftl.so.9.0 00:02:28.262 SYMLINK libspdk_ftl.so 00:02:28.522 LIB libspdk_nvmf.a 00:02:28.522 SO libspdk_nvmf.so.18.1 00:02:28.782 LIB libspdk_vhost.a 00:02:28.782 SO libspdk_vhost.so.8.0 00:02:28.782 SYMLINK libspdk_nvmf.so 00:02:28.782 SYMLINK libspdk_vhost.so 00:02:29.058 LIB libspdk_iscsi.a 00:02:29.059 SO libspdk_iscsi.so.8.0 00:02:29.059 SYMLINK libspdk_iscsi.so 00:02:29.631 CC module/vfu_device/vfu_virtio.o 00:02:29.631 CC module/vfu_device/vfu_virtio_blk.o 00:02:29.631 CC module/vfu_device/vfu_virtio_scsi.o 00:02:29.631 CC module/vfu_device/vfu_virtio_rpc.o 00:02:29.631 CC module/env_dpdk/env_dpdk_rpc.o 00:02:29.933 CC module/accel/dsa/accel_dsa.o 00:02:29.934 LIB libspdk_env_dpdk_rpc.a 00:02:29.934 CC module/accel/ioat/accel_ioat.o 00:02:29.934 CC module/accel/dsa/accel_dsa_rpc.o 00:02:29.934 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.934 CC module/accel/error/accel_error.o 00:02:29.934 CC module/accel/error/accel_error_rpc.o 00:02:29.934 CC module/keyring/linux/keyring.o 00:02:29.934 CC module/keyring/linux/keyring_rpc.o 00:02:29.934 CC module/accel/iaa/accel_iaa.o 00:02:29.934 CC module/keyring/file/keyring.o 00:02:29.934 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.934 CC module/keyring/file/keyring_rpc.o 00:02:29.934 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.934 CC module/blob/bdev/blob_bdev.o 00:02:29.934 CC module/sock/posix/posix.o 00:02:29.934 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.934 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.934 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.934 SYMLINK libspdk_env_dpdk_rpc.so 00:02:30.193 LIB libspdk_scheduler_dpdk_governor.a 00:02:30.193 LIB libspdk_keyring_file.a 00:02:30.193 LIB libspdk_scheduler_gscheduler.a 00:02:30.193 LIB libspdk_keyring_linux.a 00:02:30.193 LIB libspdk_accel_ioat.a 00:02:30.194 LIB libspdk_accel_error.a 00:02:30.194 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:30.194 SO libspdk_accel_ioat.so.6.0 00:02:30.194 SO libspdk_keyring_file.so.1.0 00:02:30.194 SO libspdk_accel_error.so.2.0 00:02:30.194 SO libspdk_scheduler_gscheduler.so.4.0 00:02:30.194 SO libspdk_keyring_linux.so.1.0 00:02:30.194 LIB libspdk_accel_iaa.a 00:02:30.194 LIB libspdk_scheduler_dynamic.a 00:02:30.194 LIB libspdk_accel_dsa.a 00:02:30.194 SYMLINK libspdk_accel_ioat.so 00:02:30.194 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:30.194 SO libspdk_scheduler_dynamic.so.4.0 00:02:30.194 SO libspdk_accel_iaa.so.3.0 00:02:30.194 SO libspdk_accel_dsa.so.5.0 00:02:30.194 SYMLINK libspdk_scheduler_gscheduler.so 00:02:30.194 LIB libspdk_blob_bdev.a 00:02:30.194 SYMLINK 
libspdk_accel_error.so 00:02:30.194 SYMLINK libspdk_keyring_file.so 00:02:30.194 SYMLINK libspdk_keyring_linux.so 00:02:30.194 SO libspdk_blob_bdev.so.11.0 00:02:30.194 SYMLINK libspdk_scheduler_dynamic.so 00:02:30.194 SYMLINK libspdk_accel_dsa.so 00:02:30.194 SYMLINK libspdk_accel_iaa.so 00:02:30.194 LIB libspdk_vfu_device.a 00:02:30.194 SYMLINK libspdk_blob_bdev.so 00:02:30.454 SO libspdk_vfu_device.so.3.0 00:02:30.454 SYMLINK libspdk_vfu_device.so 00:02:30.454 LIB libspdk_sock_posix.a 00:02:30.715 SO libspdk_sock_posix.so.6.0 00:02:30.715 SYMLINK libspdk_sock_posix.so 00:02:30.976 CC module/bdev/delay/vbdev_delay.o 00:02:30.976 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:30.976 CC module/blobfs/bdev/blobfs_bdev.o 00:02:30.976 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:30.976 CC module/bdev/passthru/vbdev_passthru.o 00:02:30.976 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:30.976 CC module/bdev/error/vbdev_error.o 00:02:30.976 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:30.977 CC module/bdev/lvol/vbdev_lvol.o 00:02:30.977 CC module/bdev/null/bdev_null.o 00:02:30.977 CC module/bdev/error/vbdev_error_rpc.o 00:02:30.977 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:30.977 CC module/bdev/gpt/gpt.o 00:02:30.977 CC module/bdev/null/bdev_null_rpc.o 00:02:30.977 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:30.977 CC module/bdev/gpt/vbdev_gpt.o 00:02:30.977 CC module/bdev/malloc/bdev_malloc.o 00:02:30.977 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:30.977 CC module/bdev/iscsi/bdev_iscsi.o 00:02:30.977 CC module/bdev/raid/bdev_raid.o 00:02:30.977 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:30.977 CC module/bdev/raid/bdev_raid_rpc.o 00:02:30.977 CC module/bdev/raid/bdev_raid_sb.o 00:02:30.977 CC module/bdev/raid/raid0.o 00:02:30.977 CC module/bdev/split/vbdev_split.o 00:02:30.977 CC module/bdev/raid/raid1.o 00:02:30.977 CC module/bdev/raid/concat.o 00:02:30.977 CC module/bdev/nvme/bdev_nvme.o 00:02:30.977 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:30.977 CC module/bdev/split/vbdev_split_rpc.o 00:02:30.977 CC module/bdev/ftl/bdev_ftl.o 00:02:30.977 CC module/bdev/nvme/nvme_rpc.o 00:02:30.977 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:30.977 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:30.977 CC module/bdev/nvme/bdev_mdns_client.o 00:02:30.977 CC module/bdev/aio/bdev_aio.o 00:02:30.977 CC module/bdev/nvme/vbdev_opal.o 00:02:30.977 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:30.977 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:30.977 CC module/bdev/aio/bdev_aio_rpc.o 00:02:30.977 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:30.977 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.237 LIB libspdk_blobfs_bdev.a 00:02:31.237 SO libspdk_blobfs_bdev.so.6.0 00:02:31.237 LIB libspdk_bdev_split.a 00:02:31.237 LIB libspdk_bdev_null.a 00:02:31.237 SO libspdk_bdev_null.so.6.0 00:02:31.237 LIB libspdk_bdev_error.a 00:02:31.237 SO libspdk_bdev_split.so.6.0 00:02:31.237 LIB libspdk_bdev_gpt.a 00:02:31.237 SYMLINK libspdk_blobfs_bdev.so 00:02:31.237 LIB libspdk_bdev_malloc.a 00:02:31.237 SO libspdk_bdev_error.so.6.0 00:02:31.237 LIB libspdk_bdev_passthru.a 00:02:31.237 LIB libspdk_bdev_ftl.a 00:02:31.237 SO libspdk_bdev_malloc.so.6.0 00:02:31.237 SO libspdk_bdev_gpt.so.6.0 00:02:31.237 LIB libspdk_bdev_zone_block.a 00:02:31.237 SYMLINK libspdk_bdev_split.so 00:02:31.237 SYMLINK libspdk_bdev_null.so 00:02:31.237 LIB libspdk_bdev_delay.a 00:02:31.237 SO libspdk_bdev_ftl.so.6.0 00:02:31.237 LIB libspdk_bdev_aio.a 00:02:31.237 SO libspdk_bdev_passthru.so.6.0 00:02:31.237 SO 
libspdk_bdev_zone_block.so.6.0 00:02:31.237 SYMLINK libspdk_bdev_error.so 00:02:31.237 SYMLINK libspdk_bdev_malloc.so 00:02:31.237 LIB libspdk_bdev_iscsi.a 00:02:31.238 SO libspdk_bdev_delay.so.6.0 00:02:31.238 SYMLINK libspdk_bdev_gpt.so 00:02:31.238 SO libspdk_bdev_aio.so.6.0 00:02:31.238 SYMLINK libspdk_bdev_ftl.so 00:02:31.238 SYMLINK libspdk_bdev_zone_block.so 00:02:31.238 SO libspdk_bdev_iscsi.so.6.0 00:02:31.238 SYMLINK libspdk_bdev_passthru.so 00:02:31.238 SYMLINK libspdk_bdev_delay.so 00:02:31.238 LIB libspdk_bdev_lvol.a 00:02:31.499 LIB libspdk_bdev_virtio.a 00:02:31.499 SYMLINK libspdk_bdev_aio.so 00:02:31.499 SO libspdk_bdev_lvol.so.6.0 00:02:31.499 SYMLINK libspdk_bdev_iscsi.so 00:02:31.499 SO libspdk_bdev_virtio.so.6.0 00:02:31.499 SYMLINK libspdk_bdev_lvol.so 00:02:31.499 SYMLINK libspdk_bdev_virtio.so 00:02:31.760 LIB libspdk_bdev_raid.a 00:02:31.760 SO libspdk_bdev_raid.so.6.0 00:02:32.020 SYMLINK libspdk_bdev_raid.so 00:02:32.964 LIB libspdk_bdev_nvme.a 00:02:32.964 SO libspdk_bdev_nvme.so.7.0 00:02:32.964 SYMLINK libspdk_bdev_nvme.so 00:02:33.593 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:33.593 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:33.593 CC module/event/subsystems/keyring/keyring.o 00:02:33.593 CC module/event/subsystems/iobuf/iobuf.o 00:02:33.593 CC module/event/subsystems/sock/sock.o 00:02:33.593 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:33.593 CC module/event/subsystems/scheduler/scheduler.o 00:02:33.593 CC module/event/subsystems/vmd/vmd.o 00:02:33.593 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:33.881 LIB libspdk_event_keyring.a 00:02:33.881 LIB libspdk_event_vhost_blk.a 00:02:33.881 LIB libspdk_event_scheduler.a 00:02:33.881 LIB libspdk_event_vfu_tgt.a 00:02:33.881 LIB libspdk_event_sock.a 00:02:33.881 LIB libspdk_event_vmd.a 00:02:33.881 LIB libspdk_event_iobuf.a 00:02:33.881 SO libspdk_event_keyring.so.1.0 00:02:33.881 SO libspdk_event_scheduler.so.4.0 00:02:33.881 SO libspdk_event_vhost_blk.so.3.0 00:02:33.881 SO libspdk_event_vfu_tgt.so.3.0 00:02:33.881 SO libspdk_event_sock.so.5.0 00:02:33.881 SO libspdk_event_vmd.so.6.0 00:02:33.881 SO libspdk_event_iobuf.so.3.0 00:02:33.881 SYMLINK libspdk_event_keyring.so 00:02:33.881 SYMLINK libspdk_event_scheduler.so 00:02:33.881 SYMLINK libspdk_event_vhost_blk.so 00:02:33.881 SYMLINK libspdk_event_vfu_tgt.so 00:02:33.881 SYMLINK libspdk_event_sock.so 00:02:33.881 SYMLINK libspdk_event_vmd.so 00:02:33.881 SYMLINK libspdk_event_iobuf.so 00:02:34.454 CC module/event/subsystems/accel/accel.o 00:02:34.454 LIB libspdk_event_accel.a 00:02:34.454 SO libspdk_event_accel.so.6.0 00:02:34.715 SYMLINK libspdk_event_accel.so 00:02:34.976 CC module/event/subsystems/bdev/bdev.o 00:02:34.976 LIB libspdk_event_bdev.a 00:02:35.238 SO libspdk_event_bdev.so.6.0 00:02:35.238 SYMLINK libspdk_event_bdev.so 00:02:35.499 CC module/event/subsystems/scsi/scsi.o 00:02:35.499 CC module/event/subsystems/nbd/nbd.o 00:02:35.499 CC module/event/subsystems/ublk/ublk.o 00:02:35.499 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:35.499 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:35.760 LIB libspdk_event_nbd.a 00:02:35.760 LIB libspdk_event_scsi.a 00:02:35.760 LIB libspdk_event_ublk.a 00:02:35.760 SO libspdk_event_nbd.so.6.0 00:02:35.760 SO libspdk_event_scsi.so.6.0 00:02:35.760 SO libspdk_event_ublk.so.3.0 00:02:35.760 LIB libspdk_event_nvmf.a 00:02:35.760 SYMLINK libspdk_event_nbd.so 00:02:35.760 SYMLINK libspdk_event_scsi.so 00:02:35.760 SYMLINK libspdk_event_ublk.so 00:02:35.760 SO libspdk_event_nvmf.so.6.0 
00:02:36.022 SYMLINK libspdk_event_nvmf.so 00:02:36.022 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.284 LIB libspdk_event_iscsi.a 00:02:36.284 LIB libspdk_event_vhost_scsi.a 00:02:36.284 SO libspdk_event_iscsi.so.6.0 00:02:36.284 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.544 SYMLINK libspdk_event_iscsi.so 00:02:36.544 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.544 SO libspdk.so.6.0 00:02:36.544 SYMLINK libspdk.so 00:02:37.114 CC app/spdk_top/spdk_top.o 00:02:37.114 CC app/trace_record/trace_record.o 00:02:37.114 CC app/spdk_lspci/spdk_lspci.o 00:02:37.114 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.114 CC app/spdk_nvme_perf/perf.o 00:02:37.114 CC app/spdk_nvme_identify/identify.o 00:02:37.114 CC test/rpc_client/rpc_client_test.o 00:02:37.114 CXX app/trace/trace.o 00:02:37.114 TEST_HEADER include/spdk/accel.h 00:02:37.114 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.114 TEST_HEADER include/spdk/assert.h 00:02:37.114 CC app/nvmf_tgt/nvmf_main.o 00:02:37.114 TEST_HEADER include/spdk/accel_module.h 00:02:37.114 TEST_HEADER include/spdk/barrier.h 00:02:37.114 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.114 TEST_HEADER include/spdk/bdev_module.h 00:02:37.114 TEST_HEADER include/spdk/base64.h 00:02:37.114 TEST_HEADER include/spdk/bdev.h 00:02:37.114 CC app/spdk_dd/spdk_dd.o 00:02:37.114 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.114 TEST_HEADER include/spdk/bit_array.h 00:02:37.114 TEST_HEADER include/spdk/bit_pool.h 00:02:37.114 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.114 TEST_HEADER include/spdk/blobfs.h 00:02:37.114 TEST_HEADER include/spdk/blob.h 00:02:37.114 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.114 TEST_HEADER include/spdk/conf.h 00:02:37.114 TEST_HEADER include/spdk/cpuset.h 00:02:37.114 TEST_HEADER include/spdk/config.h 00:02:37.114 TEST_HEADER include/spdk/crc16.h 00:02:37.114 CC app/spdk_tgt/spdk_tgt.o 00:02:37.114 TEST_HEADER include/spdk/crc32.h 00:02:37.114 TEST_HEADER include/spdk/crc64.h 00:02:37.114 TEST_HEADER include/spdk/dif.h 00:02:37.114 CC app/vhost/vhost.o 00:02:37.114 TEST_HEADER include/spdk/dma.h 00:02:37.114 TEST_HEADER include/spdk/endian.h 00:02:37.114 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.114 TEST_HEADER include/spdk/event.h 00:02:37.114 TEST_HEADER include/spdk/env.h 00:02:37.114 TEST_HEADER include/spdk/fd_group.h 00:02:37.114 TEST_HEADER include/spdk/ftl.h 00:02:37.114 TEST_HEADER include/spdk/fd.h 00:02:37.114 TEST_HEADER include/spdk/file.h 00:02:37.114 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.114 TEST_HEADER include/spdk/hexlify.h 00:02:37.114 TEST_HEADER include/spdk/histogram_data.h 00:02:37.114 TEST_HEADER include/spdk/idxd.h 00:02:37.114 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.114 TEST_HEADER include/spdk/init.h 00:02:37.114 TEST_HEADER include/spdk/ioat.h 00:02:37.114 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.114 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.114 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.114 TEST_HEADER include/spdk/json.h 00:02:37.114 TEST_HEADER include/spdk/keyring_module.h 00:02:37.114 TEST_HEADER include/spdk/keyring.h 00:02:37.114 TEST_HEADER include/spdk/likely.h 00:02:37.114 TEST_HEADER include/spdk/log.h 00:02:37.114 TEST_HEADER include/spdk/lvol.h 00:02:37.114 TEST_HEADER include/spdk/memory.h 00:02:37.114 TEST_HEADER include/spdk/nbd.h 00:02:37.114 TEST_HEADER include/spdk/mmio.h 00:02:37.114 TEST_HEADER include/spdk/notify.h 00:02:37.114 TEST_HEADER include/spdk/nvme.h 00:02:37.114 TEST_HEADER 
include/spdk/nvme_intel.h 00:02:37.114 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.114 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.114 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.114 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.114 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.114 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.114 TEST_HEADER include/spdk/nvmf.h 00:02:37.114 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.114 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.114 TEST_HEADER include/spdk/opal_spec.h 00:02:37.114 TEST_HEADER include/spdk/pci_ids.h 00:02:37.114 TEST_HEADER include/spdk/opal.h 00:02:37.114 TEST_HEADER include/spdk/pipe.h 00:02:37.114 TEST_HEADER include/spdk/queue.h 00:02:37.114 TEST_HEADER include/spdk/reduce.h 00:02:37.114 TEST_HEADER include/spdk/rpc.h 00:02:37.114 TEST_HEADER include/spdk/scsi.h 00:02:37.114 TEST_HEADER include/spdk/scheduler.h 00:02:37.114 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.114 TEST_HEADER include/spdk/sock.h 00:02:37.115 TEST_HEADER include/spdk/stdinc.h 00:02:37.115 TEST_HEADER include/spdk/string.h 00:02:37.115 TEST_HEADER include/spdk/thread.h 00:02:37.115 TEST_HEADER include/spdk/trace_parser.h 00:02:37.115 TEST_HEADER include/spdk/trace.h 00:02:37.115 TEST_HEADER include/spdk/ublk.h 00:02:37.115 TEST_HEADER include/spdk/tree.h 00:02:37.115 TEST_HEADER include/spdk/uuid.h 00:02:37.115 TEST_HEADER include/spdk/util.h 00:02:37.115 TEST_HEADER include/spdk/version.h 00:02:37.115 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.115 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.115 TEST_HEADER include/spdk/vhost.h 00:02:37.115 TEST_HEADER include/spdk/vmd.h 00:02:37.115 TEST_HEADER include/spdk/xor.h 00:02:37.115 TEST_HEADER include/spdk/zipf.h 00:02:37.115 CXX test/cpp_headers/accel.o 00:02:37.115 CXX test/cpp_headers/accel_module.o 00:02:37.115 CXX test/cpp_headers/barrier.o 00:02:37.115 CXX test/cpp_headers/assert.o 00:02:37.115 CXX test/cpp_headers/bdev.o 00:02:37.115 CXX test/cpp_headers/base64.o 00:02:37.115 CXX test/cpp_headers/bdev_module.o 00:02:37.115 CXX test/cpp_headers/bdev_zone.o 00:02:37.115 CXX test/cpp_headers/bit_array.o 00:02:37.382 CXX test/cpp_headers/bit_pool.o 00:02:37.382 CXX test/cpp_headers/blobfs.o 00:02:37.382 CXX test/cpp_headers/blob_bdev.o 00:02:37.382 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.382 CXX test/cpp_headers/blob.o 00:02:37.382 CXX test/cpp_headers/conf.o 00:02:37.382 CXX test/cpp_headers/config.o 00:02:37.382 CXX test/cpp_headers/crc16.o 00:02:37.382 CXX test/cpp_headers/crc32.o 00:02:37.382 CXX test/cpp_headers/cpuset.o 00:02:37.382 CXX test/cpp_headers/crc64.o 00:02:37.382 CXX test/cpp_headers/dif.o 00:02:37.382 CXX test/cpp_headers/dma.o 00:02:37.382 CXX test/cpp_headers/endian.o 00:02:37.382 CXX test/cpp_headers/env.o 00:02:37.382 CXX test/cpp_headers/fd_group.o 00:02:37.382 CXX test/cpp_headers/env_dpdk.o 00:02:37.382 CXX test/cpp_headers/event.o 00:02:37.382 CXX test/cpp_headers/fd.o 00:02:37.382 CXX test/cpp_headers/file.o 00:02:37.382 CXX test/cpp_headers/hexlify.o 00:02:37.382 CXX test/cpp_headers/ftl.o 00:02:37.382 CXX test/cpp_headers/idxd.o 00:02:37.382 CXX test/cpp_headers/histogram_data.o 00:02:37.382 CXX test/cpp_headers/gpt_spec.o 00:02:37.382 CXX test/cpp_headers/idxd_spec.o 00:02:37.382 CXX test/cpp_headers/ioat.o 00:02:37.382 CXX test/cpp_headers/ioat_spec.o 00:02:37.382 CXX test/cpp_headers/init.o 00:02:37.382 CXX test/cpp_headers/iscsi_spec.o 00:02:37.382 CXX test/cpp_headers/jsonrpc.o 00:02:37.382 CXX test/cpp_headers/json.o 00:02:37.382 CXX 
test/cpp_headers/keyring.o 00:02:37.382 CXX test/cpp_headers/keyring_module.o 00:02:37.382 CXX test/cpp_headers/likely.o 00:02:37.382 CXX test/cpp_headers/log.o 00:02:37.382 CXX test/cpp_headers/lvol.o 00:02:37.382 CXX test/cpp_headers/mmio.o 00:02:37.382 CXX test/cpp_headers/nvme.o 00:02:37.382 CXX test/cpp_headers/memory.o 00:02:37.382 CXX test/cpp_headers/nbd.o 00:02:37.382 CXX test/cpp_headers/notify.o 00:02:37.382 CC examples/sock/hello_world/hello_sock.o 00:02:37.382 CXX test/cpp_headers/nvme_intel.o 00:02:37.382 CC examples/ioat/verify/verify.o 00:02:37.382 CXX test/cpp_headers/nvme_ocssd.o 00:02:37.382 CXX test/cpp_headers/nvme_spec.o 00:02:37.382 CXX test/cpp_headers/nvme_zns.o 00:02:37.382 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.382 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:37.382 CXX test/cpp_headers/nvmf.o 00:02:37.382 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.382 CXX test/cpp_headers/nvmf_transport.o 00:02:37.382 CXX test/cpp_headers/nvmf_spec.o 00:02:37.382 CXX test/cpp_headers/opal.o 00:02:37.382 CXX test/cpp_headers/opal_spec.o 00:02:37.382 CXX test/cpp_headers/pci_ids.o 00:02:37.382 CC examples/vmd/led/led.o 00:02:37.382 CC examples/accel/perf/accel_perf.o 00:02:37.382 CXX test/cpp_headers/queue.o 00:02:37.382 CXX test/cpp_headers/pipe.o 00:02:37.382 CXX test/cpp_headers/reduce.o 00:02:37.382 CC examples/nvme/arbitration/arbitration.o 00:02:37.382 CXX test/cpp_headers/scheduler.o 00:02:37.382 CXX test/cpp_headers/rpc.o 00:02:37.382 CC examples/util/zipf/zipf.o 00:02:37.382 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.382 CC app/fio/nvme/fio_plugin.o 00:02:37.382 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:37.382 CC test/env/memory/memory_ut.o 00:02:37.382 CC test/event/reactor/reactor.o 00:02:37.382 CC examples/nvme/reconnect/reconnect.o 00:02:37.382 CC examples/nvme/hello_world/hello_world.o 00:02:37.382 CC examples/blob/cli/blobcli.o 00:02:37.382 CC examples/ioat/perf/perf.o 00:02:37.382 CC examples/idxd/perf/perf.o 00:02:37.382 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:37.382 CC test/event/event_perf/event_perf.o 00:02:37.382 CC test/app/histogram_perf/histogram_perf.o 00:02:37.382 CC test/event/reactor_perf/reactor_perf.o 00:02:37.382 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:37.382 CC examples/nvme/abort/abort.o 00:02:37.382 CC test/thread/poller_perf/poller_perf.o 00:02:37.382 CC test/app/jsoncat/jsoncat.o 00:02:37.382 CC examples/nvme/hotplug/hotplug.o 00:02:37.382 CC test/nvme/simple_copy/simple_copy.o 00:02:37.382 CC test/env/vtophys/vtophys.o 00:02:37.382 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.382 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.382 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.382 CC test/nvme/sgl/sgl.o 00:02:37.382 CC test/app/stub/stub.o 00:02:37.382 CC test/env/pci/pci_ut.o 00:02:37.382 CC test/accel/dif/dif.o 00:02:37.382 CC test/nvme/reset/reset.o 00:02:37.382 CC examples/blob/hello_world/hello_blob.o 00:02:37.382 CC test/nvme/e2edp/nvme_dp.o 00:02:37.382 CC test/nvme/aer/aer.o 00:02:37.382 CC test/nvme/connect_stress/connect_stress.o 00:02:37.382 CC test/nvme/startup/startup.o 00:02:37.382 CC examples/nvmf/nvmf/nvmf.o 00:02:37.382 CC test/nvme/compliance/nvme_compliance.o 00:02:37.382 CC test/nvme/overhead/overhead.o 00:02:37.382 CC test/nvme/boot_partition/boot_partition.o 00:02:37.382 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.382 CC app/fio/bdev/fio_plugin.o 00:02:37.382 CC test/nvme/reserve/reserve.o 00:02:37.382 CC test/nvme/err_injection/err_injection.o 00:02:37.382 CC 
test/event/app_repeat/app_repeat.o 00:02:37.382 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.382 CC examples/thread/thread/thread_ex.o 00:02:37.382 CXX test/cpp_headers/scsi.o 00:02:37.382 CC test/app/bdev_svc/bdev_svc.o 00:02:37.382 CC test/event/scheduler/scheduler.o 00:02:37.382 CC test/nvme/cuse/cuse.o 00:02:37.382 CC test/nvme/fdp/fdp.o 00:02:37.382 LINK spdk_lspci 00:02:37.382 CC test/bdev/bdevio/bdevio.o 00:02:37.382 CC test/blobfs/mkfs/mkfs.o 00:02:37.382 CC test/dma/test_dma/test_dma.o 00:02:37.646 LINK rpc_client_test 00:02:37.646 LINK spdk_nvme_discover 00:02:37.646 LINK nvmf_tgt 00:02:37.646 LINK interrupt_tgt 00:02:37.912 CC test/lvol/esnap/esnap.o 00:02:37.912 LINK spdk_trace_record 00:02:37.912 LINK spdk_tgt 00:02:37.912 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:37.912 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:37.912 LINK vhost 00:02:37.912 CC test/env/mem_callbacks/mem_callbacks.o 00:02:37.912 LINK iscsi_tgt 00:02:37.912 LINK reactor 00:02:37.912 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:37.912 LINK lsvmd 00:02:38.174 LINK histogram_perf 00:02:38.174 LINK led 00:02:38.174 LINK cmb_copy 00:02:38.174 LINK event_perf 00:02:38.174 LINK jsoncat 00:02:38.174 LINK reactor_perf 00:02:38.174 LINK zipf 00:02:38.174 LINK app_repeat 00:02:38.174 LINK pmr_persistence 00:02:38.174 LINK vtophys 00:02:38.174 LINK startup 00:02:38.174 LINK err_injection 00:02:38.174 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:38.174 LINK poller_perf 00:02:38.174 LINK verify 00:02:38.174 LINK doorbell_aers 00:02:38.174 LINK env_dpdk_post_init 00:02:38.174 LINK spdk_dd 00:02:38.174 LINK hello_world 00:02:38.174 CXX test/cpp_headers/scsi_spec.o 00:02:38.174 CXX test/cpp_headers/sock.o 00:02:38.174 CXX test/cpp_headers/stdinc.o 00:02:38.174 CXX test/cpp_headers/string.o 00:02:38.174 LINK hello_sock 00:02:38.174 LINK stub 00:02:38.174 CXX test/cpp_headers/thread.o 00:02:38.174 LINK boot_partition 00:02:38.174 CXX test/cpp_headers/trace.o 00:02:38.174 CXX test/cpp_headers/trace_parser.o 00:02:38.174 CXX test/cpp_headers/tree.o 00:02:38.174 CXX test/cpp_headers/ublk.o 00:02:38.174 CXX test/cpp_headers/util.o 00:02:38.174 CXX test/cpp_headers/uuid.o 00:02:38.174 LINK ioat_perf 00:02:38.174 CXX test/cpp_headers/version.o 00:02:38.174 CXX test/cpp_headers/vfio_user_pci.o 00:02:38.174 CXX test/cpp_headers/vfio_user_spec.o 00:02:38.174 LINK simple_copy 00:02:38.174 CXX test/cpp_headers/vhost.o 00:02:38.174 CXX test/cpp_headers/vmd.o 00:02:38.174 CXX test/cpp_headers/xor.o 00:02:38.174 LINK bdev_svc 00:02:38.174 CXX test/cpp_headers/zipf.o 00:02:38.174 LINK sgl 00:02:38.174 LINK connect_stress 00:02:38.174 LINK fused_ordering 00:02:38.174 LINK hotplug 00:02:38.174 LINK mkfs 00:02:38.174 LINK hello_bdev 00:02:38.433 LINK thread 00:02:38.433 LINK aer 00:02:38.433 LINK overhead 00:02:38.433 LINK reserve 00:02:38.433 LINK hello_blob 00:02:38.433 LINK arbitration 00:02:38.433 LINK scheduler 00:02:38.433 LINK idxd_perf 00:02:38.433 LINK reset 00:02:38.433 LINK nvme_dp 00:02:38.433 LINK fdp 00:02:38.433 LINK nvmf 00:02:38.433 LINK nvme_compliance 00:02:38.433 LINK reconnect 00:02:38.433 LINK abort 00:02:38.433 LINK spdk_trace 00:02:38.433 LINK accel_perf 00:02:38.433 LINK test_dma 00:02:38.433 LINK nvme_manage 00:02:38.433 LINK bdevio 00:02:38.433 LINK spdk_bdev 00:02:38.694 LINK blobcli 00:02:38.694 LINK nvme_fuzz 00:02:38.694 LINK pci_ut 00:02:38.694 LINK dif 00:02:38.694 LINK spdk_nvme 00:02:38.694 LINK vhost_fuzz 00:02:38.694 LINK spdk_top 00:02:38.694 LINK spdk_nvme_perf 00:02:38.694 LINK 
spdk_nvme_identify 00:02:38.694 LINK mem_callbacks 00:02:38.955 LINK bdevperf 00:02:38.955 LINK memory_ut 00:02:39.216 LINK cuse 00:02:39.478 LINK iscsi_fuzz 00:02:42.778 LINK esnap 00:02:42.778 00:02:42.778 real 0m50.211s 00:02:42.778 user 6m50.215s 00:02:42.778 sys 5m12.021s 00:02:42.778 14:11:20 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:42.778 14:11:20 make -- common/autotest_common.sh@10 -- $ set +x 00:02:42.778 ************************************ 00:02:42.778 END TEST make 00:02:42.778 ************************************ 00:02:42.778 14:11:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:42.778 14:11:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:42.778 14:11:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:42.778 14:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.778 14:11:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:42.778 14:11:20 -- pm/common@44 -- $ pid=2691801 00:02:42.778 14:11:20 -- pm/common@50 -- $ kill -TERM 2691801 00:02:42.778 14:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.778 14:11:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:42.778 14:11:20 -- pm/common@44 -- $ pid=2691802 00:02:42.778 14:11:20 -- pm/common@50 -- $ kill -TERM 2691802 00:02:42.778 14:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.778 14:11:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:42.778 14:11:20 -- pm/common@44 -- $ pid=2691804 00:02:42.778 14:11:20 -- pm/common@50 -- $ kill -TERM 2691804 00:02:42.778 14:11:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.778 14:11:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:42.778 14:11:20 -- pm/common@44 -- $ pid=2691828 00:02:42.778 14:11:20 -- pm/common@50 -- $ sudo -E kill -TERM 2691828 00:02:42.778 14:11:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.778 14:11:20 -- nvmf/common.sh@7 -- # uname -s 00:02:42.778 14:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.778 14:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.778 14:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.778 14:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.778 14:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.778 14:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.778 14:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.778 14:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.778 14:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.778 14:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:42.778 14:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:42.778 14:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:42.778 14:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:42.778 14:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:42.778 14:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:42.778 14:11:20 -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:42.778 14:11:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:42.778 14:11:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:42.778 14:11:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.778 14:11:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.778 14:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.778 14:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.778 14:11:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.778 14:11:20 -- paths/export.sh@5 -- # export PATH 00:02:42.778 14:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.778 14:11:20 -- nvmf/common.sh@47 -- # : 0 00:02:42.778 14:11:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:42.778 14:11:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:42.778 14:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:42.778 14:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:42.778 14:11:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:42.778 14:11:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:42.778 14:11:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:42.778 14:11:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:42.778 14:11:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:42.778 14:11:20 -- spdk/autotest.sh@32 -- # uname -s 00:02:42.778 14:11:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:42.778 14:11:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:42.778 14:11:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.778 14:11:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:42.778 14:11:20 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:42.778 14:11:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:42.778 14:11:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:42.778 14:11:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:42.778 14:11:20 -- spdk/autotest.sh@48 -- # udevadm_pid=2754497 00:02:42.778 14:11:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:42.778 14:11:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:42.778 14:11:20 -- 
pm/common@17 -- # local monitor 00:02:42.778 14:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.779 14:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.779 14:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.779 14:11:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.779 14:11:20 -- pm/common@21 -- # date +%s 00:02:42.779 14:11:20 -- pm/common@21 -- # date +%s 00:02:42.779 14:11:20 -- pm/common@25 -- # sleep 1 00:02:42.779 14:11:20 -- pm/common@21 -- # date +%s 00:02:42.779 14:11:20 -- pm/common@21 -- # date +%s 00:02:42.779 14:11:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718021480 00:02:42.779 14:11:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718021480 00:02:42.779 14:11:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718021480 00:02:42.779 14:11:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718021480 00:02:42.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718021480_collect-vmstat.pm.log 00:02:43.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718021480_collect-cpu-load.pm.log 00:02:43.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718021480_collect-cpu-temp.pm.log 00:02:43.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718021480_collect-bmc-pm.bmc.pm.log 00:02:43.983 14:11:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:43.983 14:11:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:43.983 14:11:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:43.983 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:02:43.983 14:11:21 -- spdk/autotest.sh@59 -- # create_test_list 00:02:43.983 14:11:21 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:43.983 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:02:43.983 14:11:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:43.983 14:11:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.983 14:11:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.983 14:11:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:43.983 14:11:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:43.983 14:11:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:43.983 14:11:21 -- common/autotest_common.sh@1454 -- # uname 00:02:43.983 14:11:21 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:43.983 14:11:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:43.983 
14:11:21 -- common/autotest_common.sh@1474 -- # uname 00:02:43.983 14:11:21 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:43.983 14:11:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:43.983 14:11:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:43.983 14:11:21 -- spdk/autotest.sh@72 -- # hash lcov 00:02:43.983 14:11:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:43.983 14:11:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:43.983 --rc lcov_branch_coverage=1 00:02:43.983 --rc lcov_function_coverage=1 00:02:43.983 --rc genhtml_branch_coverage=1 00:02:43.983 --rc genhtml_function_coverage=1 00:02:43.983 --rc genhtml_legend=1 00:02:43.983 --rc geninfo_all_blocks=1 00:02:43.983 ' 00:02:43.983 14:11:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:43.983 --rc lcov_branch_coverage=1 00:02:43.983 --rc lcov_function_coverage=1 00:02:43.983 --rc genhtml_branch_coverage=1 00:02:43.983 --rc genhtml_function_coverage=1 00:02:43.983 --rc genhtml_legend=1 00:02:43.983 --rc geninfo_all_blocks=1 00:02:43.983 ' 00:02:43.983 14:11:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:43.983 --rc lcov_branch_coverage=1 00:02:43.983 --rc lcov_function_coverage=1 00:02:43.983 --rc genhtml_branch_coverage=1 00:02:43.983 --rc genhtml_function_coverage=1 00:02:43.984 --rc genhtml_legend=1 00:02:43.984 --rc geninfo_all_blocks=1 00:02:43.984 --no-external' 00:02:43.984 14:11:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:43.984 --rc lcov_branch_coverage=1 00:02:43.984 --rc lcov_function_coverage=1 00:02:43.984 --rc genhtml_branch_coverage=1 00:02:43.984 --rc genhtml_function_coverage=1 00:02:43.984 --rc genhtml_legend=1 00:02:43.984 --rc geninfo_all_blocks=1 00:02:43.984 --no-external' 00:02:43.984 14:11:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:43.984 lcov: LCOV version 1.14 00:02:43.984 14:11:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.222 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no 
functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:11.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:11.143 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:11.144 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 
00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:03:11.144 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:11.145 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:11.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:11.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:13.061 14:11:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:13.061 14:11:50 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:13.061 14:11:50 -- common/autotest_common.sh@10 -- # set +x 00:03:13.061 14:11:50 -- spdk/autotest.sh@91 -- # rm -f 00:03:13.061 14:11:50 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.421 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:16.421 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:16.421 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:16.421 14:11:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:16.421 14:11:53 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:16.421 14:11:53 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:16.421 14:11:53 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:16.421 14:11:53 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:16.421 14:11:53 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:16.421 14:11:53 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:16.421 14:11:53 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.421 14:11:53 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:16.421 14:11:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:16.421 14:11:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:16.421 14:11:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:16.421 14:11:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:16.421 14:11:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:16.421 14:11:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:16.421 No valid GPT data, bailing 00:03:16.421 14:11:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
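The get_zoned_devs / is_block_zoned xtrace above boils down to one loop: visit every /sys/block/nvme* entry, read its queue/zoned attribute, and remember any device that reports something other than "none" so the destructive block tests can avoid it. A minimal standalone sketch of that pattern, with illustrative names rather than the exact autotest_common.sh helpers:

    #!/usr/bin/env bash
    # Sketch: collect zoned NVMe block devices, mirroring the
    # get_zoned_devs / is_block_zoned trace shown above.
    declare -A zoned_devs=()

    for sysfs in /sys/block/nvme*; do
        [[ -e $sysfs/queue/zoned ]] || continue        # no attribute -> treat as not zoned
        dev=${sysfs##*/}                               # e.g. nvme0n1
        if [[ $(<"$sysfs/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1                         # remember it so later steps can skip it
        fi
    done

    echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"

On this node the only namespace, nvme0n1, reports "none", so the array stays empty and autotest moves straight on to block_in_use, whose blkid probe continues just below.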
00:03:16.421 14:11:53 -- scripts/common.sh@391 -- # pt= 00:03:16.421 14:11:53 -- scripts/common.sh@392 -- # return 1 00:03:16.421 14:11:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:16.421 1+0 records in 00:03:16.421 1+0 records out 00:03:16.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392986 s, 267 MB/s 00:03:16.421 14:11:53 -- spdk/autotest.sh@118 -- # sync 00:03:16.422 14:11:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:16.422 14:11:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:16.422 14:11:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:24.559 14:12:01 -- spdk/autotest.sh@124 -- # uname -s 00:03:24.559 14:12:01 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:24.559 14:12:01 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.559 14:12:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:24.559 14:12:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:24.559 14:12:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.559 ************************************ 00:03:24.559 START TEST setup.sh 00:03:24.559 ************************************ 00:03:24.559 14:12:01 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:24.559 * Looking for test storage... 00:03:24.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.559 14:12:01 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:24.559 14:12:01 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:24.559 14:12:01 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:24.559 14:12:01 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:24.559 14:12:01 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:24.559 14:12:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.559 ************************************ 00:03:24.560 START TEST acl 00:03:24.560 ************************************ 00:03:24.560 14:12:01 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:24.560 * Looking for test storage... 
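Just above, block_in_use decided the namespace was free (spdk-gpt.py found no GPT, blkid printed an empty PTTYPE, so the check returned 1), and autotest then zeroed the first MiB of the disk and synced. A hedged sketch of that probe-then-wipe step for a single device, assuming it is acceptable to destroy whatever the device holds:

    #!/usr/bin/env bash
    # Sketch: leave devices that already carry a partition table alone,
    # otherwise scrub the first MiB so later tests start from a blank label.
    set -euo pipefail

    dev=${1:-/dev/nvme0n1}                    # device under test (destructive!)

    # blkid prints the table type (gpt, dos, ...) or nothing plus a non-zero exit
    pt=$(blkid -s PTTYPE -o value "$dev" || true)

    if [[ -n $pt ]]; then
        echo "$dev already carries a '$pt' partition table - not touching it"
        exit 0
    fi

    echo "no partition table on $dev - zeroing the first MiB"
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync

With the namespace wiped and synced, the run moves on to the setup.sh test suite, starting with the acl checks below.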
00:03:24.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.560 14:12:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:24.560 14:12:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:24.560 14:12:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.560 14:12:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.762 14:12:05 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.762 14:12:05 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.762 14:12:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.762 14:12:05 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.762 14:12:05 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.762 14:12:05 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:32.061 Hugepages 00:03:32.061 node hugesize free / total 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 00:03:32.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:32.061 14:12:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:32.061 14:12:09 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:32.061 14:12:09 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:32.061 14:12:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.061 ************************************ 00:03:32.061 START TEST denied 00:03:32.061 ************************************ 00:03:32.061 14:12:09 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:32.061 14:12:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:32.061 14:12:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:32.061 14:12:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.061 14:12:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.061 14:12:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:35.361 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:35.361 14:12:12 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.361 14:12:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.565 00:03:39.565 real 0m7.718s 00:03:39.565 user 0m2.442s 00:03:39.565 sys 0m4.407s 00:03:39.565 14:12:16 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:39.565 14:12:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:39.565 ************************************ 00:03:39.565 END TEST denied 00:03:39.565 ************************************ 00:03:39.565 14:12:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:39.565 14:12:16 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:39.565 14:12:16 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:39.565 14:12:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.565 ************************************ 00:03:39.565 START TEST allowed 00:03:39.565 ************************************ 00:03:39.565 14:12:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:39.565 14:12:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:39.565 14:12:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:39.565 14:12:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:39.565 14:12:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.565 14:12:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.856 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.856 14:12:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:44.856 14:12:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:44.856 14:12:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:44.856 14:12:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.856 14:12:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.063 00:03:49.063 real 0m8.922s 00:03:49.063 user 0m2.637s 00:03:49.063 sys 0m4.585s 00:03:49.063 14:12:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:49.063 14:12:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:49.063 ************************************ 00:03:49.063 END TEST allowed 00:03:49.063 ************************************ 00:03:49.063 00:03:49.063 real 0m24.065s 00:03:49.063 user 0m7.841s 00:03:49.063 sys 0m13.864s 00:03:49.063 14:12:25 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:49.063 14:12:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:49.063 ************************************ 00:03:49.063 END TEST acl 00:03:49.063 ************************************ 00:03:49.063 14:12:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:49.064 14:12:26 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:49.064 14:12:26 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:49.064 14:12:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.064 ************************************ 00:03:49.064 START TEST hugepages 00:03:49.064 ************************************ 00:03:49.064 14:12:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:49.064 * Looking for test storage... 00:03:49.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 108420100 kB' 'MemAvailable: 111609380 kB' 'Buffers: 2704 kB' 'Cached: 9350572 kB' 'SwapCached: 0 kB' 'Active: 6358612 kB' 'Inactive: 3492476 kB' 'Active(anon): 5967852 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501248 kB' 'Mapped: 190960 kB' 'Shmem: 5470040 kB' 'KReclaimable: 260748 kB' 'Slab: 992668 kB' 'SReclaimable: 260748 kB' 'SUnreclaim: 731920 kB' 'KernelStack: 27216 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460868 kB' 'Committed_AS: 7400896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234976 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
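The long run of [[ <field> == Hugepagesize ]] / continue lines that begins here (and keeps going below) is only xtrace noise from get_meminfo: the helper slurps /proc/meminfo (or a per-NUMA-node meminfo file), strips any "Node N " prefix, and walks the fields until it reaches the one it was asked for. A compact sketch of the same idea, using illustrative names rather than the exact setup/common.sh helper:

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo (or a per-node meminfo file),
    # the way the get_meminfo trace above scans for Hugepagesize.
    meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # per-node files prefix every field with "Node N "; drop that first
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                   # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    meminfo_value Hugepagesize                # prints 2048 on this node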
00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.064 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:49.065 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:49.066 14:12:26 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:49.066 14:12:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:49.066 14:12:26 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:49.066 14:12:26 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:49.066 14:12:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.066 ************************************ 00:03:49.066 START TEST default_setup 00:03:49.066 ************************************ 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.066 14:12:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.369 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
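The trace above walks /proc/meminfo field by field with IFS=': ' until the requested key matches, then echoes its value (2048 for Hugepagesize on this machine), and hugepages.sh takes that value as default_hugepages before clearing every node's counters and starting the default_setup test. A minimal bash sketch of that lookup pattern follows; it covers only the system-wide /proc/meminfo case (the traced helper also checks for a per-node sysfs meminfo file), and the function name is illustrative, not the actual setup/common.sh source.

# Illustrative lookup of a single /proc/meminfo field, mirroring the
# IFS=': ' / read -r var val _ loop stepped through in the trace above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one matches, then print its value.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize    # prints 2048 here, matching the 'echo 2048' in the trace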
00:03:52.369 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:52.369 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110582840 kB' 'MemAvailable: 113771624 kB' 'Buffers: 2704 kB' 'Cached: 9350692 kB' 'SwapCached: 0 kB' 'Active: 6377168 kB' 'Inactive: 3492476 kB' 'Active(anon): 5986408 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519092 kB' 'Mapped: 191756 kB' 'Shmem: 5470160 kB' 'KReclaimable: 259756 kB' 'Slab: 990436 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730680 kB' 'KernelStack: 27120 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7423448 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 234928 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
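The meminfo snapshot printed above is already consistent with the 1024-page target set earlier: HugePages_Total times Hugepagesize equals the Hugetlb figure in the same dump. A quick arithmetic check with values copied from that snapshot (a sanity check only, not part of the test scripts):

hugepages_total=1024     # 'HugePages_Total: 1024' in the snapshot above
hugepagesize_kb=2048     # 'Hugepagesize: 2048 kB'
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'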
00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.369 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.370 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
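With AnonHugePages resolved to 0, the trace moves on to HugePages_Surp using the same per-field scan. The clear_hp pass earlier wrote 0 into every per-node, per-size nr_hugepages entry before the test allocated its own pages; a rough, illustrative equivalent of that clear-and-read-back step is sketched below (paths follow the ones visible in the trace, writing them requires root, and the existence guard is an added safety check rather than part of the traced script).

# Clear and read back per-node hugepage counts, mirroring the clear_hp
# 'echo 0' steps visible earlier in the trace.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node_dir"/hugepages/hugepages-*/nr_hugepages; do
        [[ -e $hp ]] || continue
        echo 0 | sudo tee "$hp" >/dev/null
        printf '%s -> %s\n' "$hp" "$(cat "$hp")"   # expect 0 after the clear
    done
done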
00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110582432 kB' 'MemAvailable: 113771216 kB' 'Buffers: 2704 kB' 'Cached: 9350696 kB' 'SwapCached: 0 kB' 'Active: 6381072 kB' 'Inactive: 3492476 kB' 'Active(anon): 5990312 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523692 kB' 'Mapped: 191788 kB' 'Shmem: 5470164 kB' 'KReclaimable: 259756 kB' 'Slab: 990396 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730640 kB' 'KernelStack: 27152 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7427060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234852 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:52.371 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.372 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110582544 kB' 'MemAvailable: 113771328 kB' 'Buffers: 2704 kB' 'Cached: 9350700 kB' 'SwapCached: 0 kB' 'Active: 6375600 kB' 'Inactive: 3492476 kB' 'Active(anon): 5984840 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518164 kB' 'Mapped: 191148 kB' 'Shmem: 5470168 kB' 'KReclaimable: 259756 kB' 'Slab: 990412 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730656 kB' 'KernelStack: 27136 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7420960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 
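The printf '%s\n' 'MemTotal: ...' trace above is the captured meminfo snapshot that the harness then walks field by field, producing one [[ ... ]] / continue pair per key until it reaches the field it was asked for. A minimal sketch of that helper, reconstructed approximately from the traced setup/common.sh commands (argument handling is simplified and this is not the verbatim script):

#!/usr/bin/env bash
shopt -s extglob                      # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}          # field name, optional NUMA node
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node query: read the node's own meminfo instead of the global one.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node <n> "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # traced as the escaped \H\u\g\e... comparisons
        echo "$val"                        # any trailing "kB" unit lands in "_"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Rsvd            # -> 0 in this run
get_meminfo HugePages_Surp 0          # node 0 only, as queried later in this log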
14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.373 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 
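The backslash-escaped keys in these comparisons (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and the like) are an xtrace artifact rather than anything written in the script: a quoted right-hand side of == inside [[ ]] is matched as a literal string, and bash's -x trace escapes each character of that literal pattern when printing it. A tiny standalone demonstration (the variable name is illustrative):

set -x
var=HugePages_Rsvd
[[ $var == "HugePages_Rsvd" ]] && echo match
# traced roughly as: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]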
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.374 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.375 nr_hugepages=1024 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.375 resv_hugepages=0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.375 surplus_hugepages=0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.375 anon_hugepages=0 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110582828 kB' 'MemAvailable: 113771612 kB' 'Buffers: 2704 kB' 'Cached: 9350752 kB' 'SwapCached: 0 kB' 'Active: 6375072 
kB' 'Inactive: 3492476 kB' 'Active(anon): 5984312 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517524 kB' 'Mapped: 191148 kB' 'Shmem: 5470220 kB' 'KReclaimable: 259756 kB' 'Slab: 990412 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730656 kB' 'KernelStack: 27120 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7420980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234848 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 
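The hugepages.sh checks traced just above ((( 1024 == nr_hugepages + surp + resv )) followed by a re-read of HugePages_Total) verify that the kernel's hugepage counters account for exactly the pages the test requested: total = requested + reserved + surplus, which is 1024 = 1024 + 0 + 0 in this run. The same invariant can be checked standalone against /proc/meminfo (the harness itself uses its traced get_meminfo helper rather than awk; this is only an illustration):

nr_hugepages=1024                                             # value requested by the test
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 here
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 here
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"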
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.375 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 
14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.376 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:52.377 
14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53292788 kB' 'MemUsed: 12366220 kB' 'SwapCached: 0 kB' 'Active: 4516488 kB' 'Inactive: 3314440 kB' 'Active(anon): 4393532 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546904 kB' 'Mapped: 110096 kB' 'AnonPages: 287392 kB' 'Shmem: 4109508 kB' 'KernelStack: 13672 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 559680 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 435604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 
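Having confirmed the global count, the harness switches to per-node accounting: get_nodes enumerates /sys/devices/system/node/node+([0-9]) (two nodes here, with 1024 pages recorded for node0 and 0 for node1), and get_meminfo is re-run with node=0 so that it reads /sys/devices/system/node/node0/meminfo, whose "Node 0" line prefix is stripped as shown earlier. A standalone way to read the same per-node counters (standard sysfs paths, 2048 kB pages as in this run; not the harness's own code):

# Per-node hugepage counts from sysfs
nodes=()
for node in /sys/devices/system/node/node[0-9]*; do
    nodes[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes: ${!nodes[*]} -> ${nodes[*]}"    # e.g. "nodes: 0 1 -> 1024 0"

# The per-node meminfo prefixes every line with "Node <n>", so the surplus
# counter for node 0 is the fourth field:
awk '/HugePages_Surp:/ {print $4}' /sys/devices/system/node/node0/meminfo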
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.377 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.378 node0=1024 expecting 1024 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.378 00:03:52.378 real 0m3.617s 00:03:52.378 user 0m1.394s 00:03:52.378 sys 0m2.218s 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:52.378 14:12:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:52.378 ************************************ 00:03:52.378 END TEST default_setup 00:03:52.378 ************************************ 00:03:52.378 14:12:29 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:52.378 14:12:29 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:52.378 14:12:29 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:52.378 14:12:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.378 ************************************ 00:03:52.378 START TEST per_node_1G_alloc 00:03:52.378 ************************************ 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:52.378 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
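The per_node_1G_alloc trace that follows sizes the reservation as 1048576 kB / 2048 kB = 512 default-size hugepages and assigns that count to each of nodes 0 and 1 (1024 pages, 2 GiB, in total) before handing the work to scripts/setup.sh via NRHUGE=512 HUGENODE=0,1. Below is a minimal standalone sketch of that arithmetic and of the stock kernel sysfs knob for per-node reservations; the variable names are illustrative, the numbers are the ones from this run, and whether setup.sh drives exactly this sysfs path is an assumption, not something shown in the log.

# per_node_1G_alloc sizing, condensed (values taken from this run)
size_kb=1048576        # requested reservation per NUMA node: 1 GiB in kB
hugepage_kb=2048       # Hugepagesize reported in /proc/meminfo
nodes=(0 1)            # HUGENODE=0,1

pages_per_node=$(( size_kb / hugepage_kb ))        # 512  (NRHUGE=512)
total_pages=$(( pages_per_node * ${#nodes[@]} ))   # 1024 -> "nr_hugepages=1024" later in the trace

# Standard kernel interface for a per-node reservation of 2048 kB pages:
for n in "${nodes[@]}"; do
  echo "$pages_per_node" | sudo tee \
    "/sys/devices/system/node/node$n/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages" >/dev/null
done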
00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.379 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.639 14:12:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.011 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:56.011 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.011 14:12:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110599820 kB' 'MemAvailable: 113788604 kB' 'Buffers: 2704 kB' 'Cached: 9350848 kB' 'SwapCached: 0 kB' 'Active: 6382832 kB' 'Inactive: 3492476 kB' 'Active(anon): 5992072 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525232 kB' 'Mapped: 191712 kB' 'Shmem: 5470316 kB' 'KReclaimable: 259756 kB' 'Slab: 990100 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730344 kB' 'KernelStack: 27296 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7430684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235060 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 
kB' 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.011 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 
14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.012 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
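The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above and below are get_meminfo stepping through /proc/meminfo one key at a time until it reaches the requested field, then echoing its value (0 for HugePages_Surp in this run). A condensed equivalent of that loop is sketched here; it is not the verbatim SPDK helper, only the same parsing logic in standalone form, and it omits the per-node meminfo variant the real helper also handles.

# Condensed equivalent of the get_meminfo loop traced above
get_meminfo() {
  # Print the value of one /proc/meminfo key, e.g. "get_meminfo HugePages_Surp" -> 0.
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"      # first field after the colon; a trailing "kB" lands in $_
      return 0
    fi
  done < /proc/meminfo
  return 1             # key not found
}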
00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110600344 kB' 'MemAvailable: 113789128 kB' 'Buffers: 2704 kB' 'Cached: 9350852 kB' 'SwapCached: 0 kB' 'Active: 6375036 kB' 'Inactive: 3492476 kB' 'Active(anon): 5984276 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516760 kB' 'Mapped: 191088 kB' 'Shmem: 5470320 kB' 'KReclaimable: 259756 kB' 'Slab: 990032 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730276 kB' 'KernelStack: 27296 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7412320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235040 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.013 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.014 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.015 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110603228 kB' 'MemAvailable: 113792012 kB' 'Buffers: 2704 kB' 'Cached: 9350868 kB' 'SwapCached: 0 kB' 'Active: 6373896 kB' 'Inactive: 3492476 kB' 'Active(anon): 5983136 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516232 kB' 'Mapped: 190040 kB' 'Shmem: 5470336 kB' 'KReclaimable: 259756 kB' 'Slab: 990008 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730252 kB' 'KernelStack: 27248 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7408788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235008 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.015 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.016 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.017 nr_hugepages=1024 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.017 resv_hugepages=0 00:03:56.017 14:12:33 
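With HugePages_Rsvd likewise resolving to 0, the script echoes the pool counters (nr_hugepages=1024, resv_hugepages=0, plus the surplus and anon values just below) and then asserts that the kernel-reported total equals nr_hugepages plus surplus plus reserved. A compact, illustrative equivalent of that bookkeeping check, assuming a hypothetical verify_hugepage_accounting helper:

#!/usr/bin/env bash
# Stand-alone version of the pool consistency check; only the shape of the
# (( 1024 == nr_hugepages + surp + resv )) assertion comes from the trace.
verify_hugepage_accounting() {
    local expected=$1
    local total rsvd surp
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    # The kernel-reported pool must equal the requested pages plus any
    # surplus and reserved pages.
    (( total == expected + surp + rsvd ))
}

verify_hugepage_accounting 1024 && echo "hugepage accounting consistent"
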
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.017 surplus_hugepages=0 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.017 anon_hugepages=0 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110604564 kB' 'MemAvailable: 113793348 kB' 'Buffers: 2704 kB' 'Cached: 9350872 kB' 'SwapCached: 0 kB' 'Active: 6373884 kB' 'Inactive: 3492476 kB' 'Active(anon): 5983124 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516248 kB' 'Mapped: 190040 kB' 'Shmem: 5470340 kB' 'KReclaimable: 259756 kB' 'Slab: 990004 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730248 kB' 'KernelStack: 27056 kB' 'PageTables: 7272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7407572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234864 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.017 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.018 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.019 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54349508 kB' 'MemUsed: 11309500 kB' 'SwapCached: 0 kB' 'Active: 4516172 kB' 'Inactive: 3314440 kB' 'Active(anon): 4393216 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547028 kB' 'Mapped: 109268 kB' 'AnonPages: 286964 kB' 'Shmem: 4109632 kB' 'KernelStack: 13624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 559464 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 435388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.019 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.020 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 56255392 kB' 'MemUsed: 4424436 kB' 'SwapCached: 0 kB' 'Active: 1857552 kB' 'Inactive: 178036 kB' 'Active(anon): 1589748 kB' 'Inactive(anon): 0 kB' 'Active(file): 267804 kB' 'Inactive(file): 178036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1806572 kB' 'Mapped: 80772 kB' 'AnonPages: 229100 kB' 'Shmem: 1360732 kB' 'KernelStack: 13464 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135680 kB' 'Slab: 430592 kB' 'SReclaimable: 135680 kB' 'SUnreclaim: 294912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
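To make the dense xtrace above easier to follow: setup/common.sh's get_meminfo (as reconstructed from this trace, not quoted from the SPDK source) picks the per-node meminfo file when a node is given, strips the "Node <id> " prefix, and walks "key: value" pairs until it reaches the requested field — here HugePages_Surp for node 1, which the printf dump above shows as 0. A minimal stand-alone sketch of that pattern; the function name and the sed-based prefix stripping are illustrative simplifications.
#!/usr/bin/env bash
# Sketch of the meminfo lookup exercised above; not the SPDK implementation.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <id> "; drop it, then split on ": ".
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # numeric value only; a trailing "kB" lands in $_
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}
# Example: get_meminfo_sketch HugePages_Surp 1  ->  0 for the node1 dump above.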
00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
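The hugepages.sh@115-@117 lines woven through this scan do the per-node bookkeeping: for every NUMA node, the reserved count and the node's HugePages_Surp (0 in this run) are folded into nodes_test before the final "expecting 512" comparison. A compact sketch of that loop, reusing the get_meminfo_sketch stand-in from the previous example; the array contents and resv=0 are taken from this run, the names are illustrative.
nodes_test=( [0]=512 [1]=512 )   # per-node hugepage counts seeded by the test
resv=0                           # reserved pages to fold back in (0 in this run)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting 512"
done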
00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.021 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.022 node0=512 expecting 512 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.022 node1=512 expecting 512 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.022 00:03:56.022 real 0m3.245s 00:03:56.022 user 0m1.204s 00:03:56.022 sys 0m2.018s 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:56.022 14:12:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.022 ************************************ 00:03:56.022 END TEST per_node_1G_alloc 00:03:56.022 ************************************ 00:03:56.022 14:12:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:56.022 14:12:33 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:56.022 14:12:33 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:56.022 14:12:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.022 ************************************ 00:03:56.022 START TEST even_2G_alloc 00:03:56.022 ************************************ 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.022 14:12:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.329 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.329 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.329 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
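Stepping out of the driver listing for a moment: the even_2G_alloc prologue just above turns the 2097152 argument into nr_hugepages=1024 and seeds 512 pages for each of the two NUMA nodes, which is what HUGE_EVEN_ALLOC=yes is meant to verify. A minimal sketch of that arithmetic; the kB unit and the variable names are assumptions read off the trace, not taken from hugepages.sh.
size=2097152                                   # requested size, assumed kB (2 GiB)
default_hugepages=2048                         # default hugepage size in kB (2 MiB)
nr_hugepages=$(( size / default_hugepages ))   # 1024, matching the trace
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))        # 512 per node for an even split
echo "NRHUGE=$nr_hugepages -> $per_node hugepages expected on each of $no_nodes nodes"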
00:03:59.329 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.329 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.330 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110619904 kB' 'MemAvailable: 113808688 kB' 'Buffers: 2704 kB' 'Cached: 9351028 kB' 'SwapCached: 0 kB' 'Active: 6375412 kB' 'Inactive: 3492476 kB' 'Active(anon): 5984652 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516968 kB' 'Mapped: 190188 kB' 'Shmem: 5470496 kB' 'KReclaimable: 259756 kB' 'Slab: 989904 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730148 kB' 'KernelStack: 27168 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7408316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234864 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.330 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.331 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110620900 kB' 'MemAvailable: 113809684 kB' 'Buffers: 2704 kB' 'Cached: 9351032 kB' 'SwapCached: 0 kB' 'Active: 6374672 kB' 'Inactive: 3492476 kB' 'Active(anon): 5983912 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516652 kB' 'Mapped: 190104 kB' 'Shmem: 5470500 kB' 'KReclaimable: 259756 kB' 'Slab: 989908 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730152 kB' 'KernelStack: 27152 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7408336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234848 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.331 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 
14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.332 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 
14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 
14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110620144 kB' 'MemAvailable: 113808928 kB' 'Buffers: 2704 kB' 'Cached: 9351032 kB' 'SwapCached: 0 kB' 'Active: 6374320 kB' 'Inactive: 3492476 kB' 'Active(anon): 5983560 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516300 kB' 'Mapped: 190104 kB' 'Shmem: 5470500 kB' 'KReclaimable: 259756 kB' 'Slab: 989908 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730152 kB' 'KernelStack: 27136 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7408356 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234848 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.333 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.334 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.335 nr_hugepages=1024 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.335 resv_hugepages=0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.335 surplus_hugepages=0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.335 anon_hugepages=0 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.335 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 
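The repeated IFS=': ' / read -r var val _ / continue lines above are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo for one field at a time (HugePages_Surp and HugePages_Rsvd above, HugePages_Total next). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the repository (the function name and argument handling here are simplified assumptions):

shopt -s extglob   # needed for the +([0-9]) pattern used to strip per-node prefixes

# Sketch only: echo the value of one meminfo field, optionally for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # Per-node statistics live in sysfs and prefix every line with "Node <n> ".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix when present
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # this comparison is what fills the trace above
        echo "$val"                       # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done
    return 1
}

# Example: get_meminfo_sketch HugePages_Rsvd     -> 0 in this run
#          get_meminfo_sketch HugePages_Surp 0   -> per-node value read from node0/meminfo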
14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110620824 kB' 'MemAvailable: 113809608 kB' 'Buffers: 2704 kB' 'Cached: 9351072 kB' 'SwapCached: 0 kB' 'Active: 6374504 kB' 'Inactive: 3492476 kB' 'Active(anon): 5983744 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516444 kB' 'Mapped: 190104 kB' 'Shmem: 5470540 kB' 'KReclaimable: 259756 kB' 'Slab: 989908 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730152 kB' 'KernelStack: 27136 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7408380 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234848 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.336 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
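The scan in progress here is the same helper re-reading HugePages_Total; hugepages.sh (lines 99-110 in the trace) then checks that the pool adds up. Roughly, and assuming the sketch above for the meminfo reads (verify_hugepage_pool is a hypothetical name, not a function from the repository):

verify_hugepage_pool() {
    local nr_hugepages=1024                          # what the even_2G_alloc case requested
    local surp resv total
    surp=$(get_meminfo_sketch HugePages_Surp)        # 0 in the run above
    resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0 in the run above
    total=$(get_meminfo_sketch HugePages_Total)      # 1024 in the run above
    # The pool is considered settled only when the kernel-wide total equals the
    # requested count plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv ))
}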
00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.337 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54364528 kB' 'MemUsed: 11294480 kB' 'SwapCached: 0 kB' 'Active: 4517044 kB' 'Inactive: 3314440 kB' 'Active(anon): 4394088 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547224 kB' 'Mapped: 109324 kB' 'AnonPages: 287464 kB' 'Shmem: 4109828 kB' 'KernelStack: 13720 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 559212 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 435136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.338 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.339 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 56256548 kB' 'MemUsed: 4423280 kB' 'SwapCached: 0 kB' 'Active: 1857308 kB' 'Inactive: 178036 kB' 'Active(anon): 1589504 kB' 'Inactive(anon): 0 kB' 'Active(file): 267804 kB' 'Inactive(file): 178036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1806588 kB' 'Mapped: 80780 kB' 'AnonPages: 228788 kB' 'Shmem: 1360748 kB' 'KernelStack: 13416 kB' 'PageTables: 3388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135680 kB' 'Slab: 430696 kB' 'SReclaimable: 135680 kB' 
'SUnreclaim: 295016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.340 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.341 node0=512 expecting 512 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:59.341 node1=512 expecting 512 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.341 00:03:59.341 real 0m3.487s 00:03:59.341 user 0m1.387s 00:03:59.341 sys 0m2.162s 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:59.341 14:12:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.341 ************************************ 00:03:59.341 END TEST even_2G_alloc 00:03:59.341 ************************************ 00:03:59.341 14:12:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:59.341 14:12:36 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:59.341 14:12:36 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:59.341 14:12:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.341 ************************************ 00:03:59.341 START TEST odd_alloc 00:03:59.341 ************************************ 00:03:59.341 14:12:36 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.341 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.342 14:12:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.647 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 
00:04:02.647 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.647 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110622476 kB' 'MemAvailable: 113811260 kB' 'Buffers: 2704 kB' 'Cached: 9351208 kB' 'SwapCached: 0 kB' 'Active: 6377476 kB' 'Inactive: 3492476 kB' 'Active(anon): 5986716 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519324 kB' 'Mapped: 190220 kB' 'Shmem: 5470676 kB' 'KReclaimable: 259756 kB' 'Slab: 989620 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 729864 kB' 'KernelStack: 27056 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7409264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234704 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.647 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 
14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.648 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 
14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
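The xtrace above is the per-key scan that setup/common.sh runs for AnonHugePages: each 'Key: value' pair of the captured meminfo snapshot is read with IFS=': ' and skipped with continue until the requested key matches, at which point the value is echoed and the function returns (here anon=0). A minimal standalone sketch of that lookup, using a hypothetical helper name (get_meminfo_value is not the real function) and reading /proc/meminfo directly instead of the pre-captured array:

get_meminfo_value() {
    local get=$1 var val _
    # Scan "Key: value [kB]" lines; skip until the requested key matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: mirrors the result echoed above (anon=0).
get_meminfo_value AnonHugePages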
00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110623332 kB' 'MemAvailable: 113812116 kB' 'Buffers: 2704 kB' 'Cached: 9351212 kB' 'SwapCached: 0 kB' 'Active: 6376672 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985912 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518872 kB' 'Mapped: 190108 kB' 'Shmem: 5470680 kB' 'KReclaimable: 259756 kB' 'Slab: 989612 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 729856 kB' 'KernelStack: 27056 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7409280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234704 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.649 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110622400 kB' 'MemAvailable: 113811184 kB' 'Buffers: 2704 kB' 'Cached: 9351212 kB' 'SwapCached: 0 kB' 'Active: 6376652 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985892 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518824 kB' 'Mapped: 190108 kB' 'Shmem: 5470680 kB' 'KReclaimable: 259756 kB' 'Slab: 989612 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 729856 kB' 'KernelStack: 27072 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7409304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234704 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
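Before each scan, get_meminfo decides where its snapshot comes from: with node left empty the test for /sys/devices/system/node/node/meminfo fails, the system-wide /proc/meminfo is used, and the captured lines are stripped of any 'Node <n> ' prefix so per-node and global files parse identically. A short sketch of that setup step under the same assumptions (illustrative variable handling, not the exact setup/common.sh code; the extglob strip pattern is the one visible in the trace):

shopt -s extglob                     # needed for the +([0-9]) pattern below
node=${1:-}                          # empty => system-wide snapshot
mem_f=/proc/meminfo
per_node=/sys/devices/system/node/node${node}/meminfo
# Only switch to the per-node file when a node was requested and it exists.
if [[ -n $node && -e $per_node ]]; then
    mem_f=$per_node
fi
mapfile -t mem < "$mem_f"
# Per-node files prefix every line with "Node <n> "; drop it so the same
# "Key: value" parser works for both sources.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]:0:3}"        # peek at the first few normalized lines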
00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.651 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.652 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.653 nr_hugepages=1025 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.653 resv_hugepages=0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.653 surplus_hugepages=0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.653 anon_hugepages=0 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 
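At this point the odd_alloc case has all four counters it needs -- nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 -- and hugepages.sh checks that the HugePages_Total it read (1025) equals the requested page count plus surplus plus reserved before get_meminfo is called once more for HugePages_Total. A compact sketch of that same accounting check against a live system, assuming the target count is exported as NRHUGE (a hypothetical variable name, not one set by this log):

NRHUGE=${NRHUGE:-1025}               # requested (odd) number of 2048 kB pages
read -r _ total < <(grep '^HugePages_Total:' /proc/meminfo)
read -r _ resv  < <(grep '^HugePages_Rsvd:'  /proc/meminfo)
read -r _ surp  < <(grep '^HugePages_Surp:'  /proc/meminfo)
# Mirrors the hugepages.sh@107 test: the pool must account for every requested page.
if (( total == NRHUGE + surp + resv )); then
    echo "hugepage accounting OK: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting: total=$total expected=$((NRHUGE + surp + resv))"
fi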
00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110623056 kB' 'MemAvailable: 113811840 kB' 'Buffers: 2704 kB' 'Cached: 9351212 kB' 'SwapCached: 0 kB' 'Active: 6376400 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985640 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518564 kB' 'Mapped: 190108 kB' 'Shmem: 5470680 kB' 'KReclaimable: 259756 kB' 'Slab: 989612 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 729856 kB' 'KernelStack: 27056 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7409324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234704 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.653 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.916 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
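The wall of '[[ field == key ]] / continue' pairs running through this part of the trace is setup/common.sh's get_meminfo helper scanning one meminfo field per iteration until it reaches the key it was asked for (HugePages_Total in this pass). A minimal sketch of that lookup, reconstructed from these xtrace lines rather than copied from the SPDK source tree:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup seen in the trace (common.sh @17-@33).
    # extglob is assumed for the "Node <N> " prefix strip on per-node files.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node stats live under /sys/devices/system/node/node<N>/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the fields; each non-matching field is one "continue" in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # system-wide total
    get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0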
00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.917 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54365044 kB' 'MemUsed: 11293964 kB' 'SwapCached: 0 kB' 'Active: 4515804 kB' 'Inactive: 3314440 kB' 'Active(anon): 4392848 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547284 kB' 'Mapped: 109328 kB' 'AnonPages: 286412 kB' 'Shmem: 4109888 kB' 'KernelStack: 13592 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 558916 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 434840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.918 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 56258876 kB' 'MemUsed: 4420952 kB' 'SwapCached: 0 kB' 'Active: 1860584 kB' 'Inactive: 178036 kB' 'Active(anon): 1592780 kB' 'Inactive(anon): 0 kB' 'Active(file): 267804 kB' 'Inactive(file): 178036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1806708 kB' 'Mapped: 80780 kB' 'AnonPages: 232108 kB' 'Shmem: 1360868 kB' 'KernelStack: 13464 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135680 kB' 'Slab: 430696 kB' 'SReclaimable: 135680 kB' 'SUnreclaim: 295016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
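Around here the trace repeats the same meminfo scan once per NUMA node (node0, then node1) to pick up HugePages_Surp, and then reconciles the per-node counts against what the test requested. As read from hugepages.sh @110-@130 in the trace: the global HugePages_Total (1025) must equal nr_hugepages plus surplus plus reserved, and the per-node counts only have to match as a set, which is why the run can report 'node0=512 expecting 513' / 'node1=513 expecting 512' further down and still pass. A hedged sketch of that accounting, with illustrative values standing in for the real sysfs reads:

    #!/usr/bin/env bash
    # Sketch of the per-node verification inferred from hugepages.sh @110-@130.
    # The per-node counts are compared as a set of values, so it does not matter
    # which node ended up with the extra page of the odd 1025 allocation.
    declare -a sorted_t=() sorted_s=()
    nodes_test=([0]=513 [1]=512)   # hypothetical requested split of 1025 pages
    nodes_sys=([0]=512 [1]=513)    # hypothetical placement reported by sysfs
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # numeric index doubles as the sort key
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "odd_alloc split verified"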
00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.921 14:12:40 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:02.921 node0=512 expecting 513 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:02.921 node1=513 expecting 512 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:02.921 00:04:02.921 real 0m3.474s 00:04:02.921 user 0m1.366s 00:04:02.921 sys 0m2.172s 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:02.921 14:12:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.921 ************************************ 00:04:02.921 END TEST odd_alloc 00:04:02.921 ************************************ 00:04:02.921 14:12:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:02.921 14:12:40 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:02.921 14:12:40 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:02.921 14:12:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.921 ************************************ 00:04:02.921 START TEST custom_alloc 00:04:02.921 ************************************ 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local 
_no_nodes=2 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:02.921 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.922 14:12:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.223 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.223 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 
0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.223 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 109564536 kB' 'MemAvailable: 112753320 kB' 'Buffers: 2704 kB' 'Cached: 9351380 kB' 'SwapCached: 0 kB' 'Active: 6376340 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985580 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517540 kB' 'Mapped: 190192 kB' 'Shmem: 5470848 kB' 'KReclaimable: 259756 kB' 'Slab: 990440 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730684 kB' 'KernelStack: 27088 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7410096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234880 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 
14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.224 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 109566404 kB' 'MemAvailable: 112755188 kB' 'Buffers: 2704 kB' 'Cached: 9351384 kB' 'SwapCached: 0 kB' 'Active: 6375960 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985200 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517228 kB' 'Mapped: 190192 kB' 'Shmem: 5470852 kB' 'KReclaimable: 259756 kB' 'Slab: 990440 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730684 kB' 'KernelStack: 27072 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7410116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.225 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
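For context on what this verification pass is checking: the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' step above asks scripts/setup.sh for 512 and 1024 hugepages on NUMA nodes 0 and 1, 1536 pages of 2048 kB in total. A minimal stand-alone sketch of that per-node allocation, using the standard kernel sysfs interface rather than the SPDK script itself, and assuming a two-node machine with the default 2048 kB hugepage size, would look like this (must run as root):
# Illustrative sketch only -- not scripts/setup.sh. Per-node 2048 kB hugepage
# allocation equivalent to HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'.
declare -A nodes_hp=([0]=512 [1]=1024)    # pages per NUMA node (assumed layout)
for node in "${!nodes_hp[@]}"; do
    # Standard kernel sysfs interface for per-node hugepage pools (root required).
    echo "${nodes_hp[$node]}" \
        > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
done
# The system-wide total should then match the 1536 pages verified in this log.
grep '^HugePages_Total:' /proc/meminfo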
00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.226 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 109565236 kB' 'MemAvailable: 112754020 kB' 'Buffers: 2704 kB' 'Cached: 9351384 kB' 'SwapCached: 0 kB' 'Active: 6375504 kB' 'Inactive: 3492476 kB' 'Active(anon): 5984744 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517236 kB' 'Mapped: 190116 kB' 'Shmem: 5470852 kB' 'KReclaimable: 259756 kB' 'Slab: 990432 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 
730676 kB' 'KernelStack: 27088 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7410136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
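The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' until the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) matches, then echoing its value. A minimal stand-alone equivalent of that lookup, sketched here without the per-node /sys/devices/system/node/node$node/meminfo handling the in-tree helper also supports:
# Minimal sketch of the get_meminfo lookup traced above (system-wide only).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # First field is the meminfo key, second its value (the kB suffix lands in _).
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}
# The counters verify_nr_hugepages checks in this test:
get_meminfo HugePages_Total   # 1536 expected here
get_meminfo HugePages_Surp    # 0
get_meminfo HugePages_Rsvd    # 0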
00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.227 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 
14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.228 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:06.229 nr_hugepages=1536 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.229 resv_hugepages=0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.229 surplus_hugepages=0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.229 anon_hugepages=0 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 109566100 kB' 'MemAvailable: 112754884 kB' 'Buffers: 2704 kB' 'Cached: 9351420 kB' 'SwapCached: 0 kB' 'Active: 6375424 kB' 'Inactive: 3492476 kB' 'Active(anon): 5984664 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517092 kB' 'Mapped: 190116 kB' 'Shmem: 5470888 kB' 'KReclaimable: 259756 kB' 'Slab: 990432 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730676 kB' 'KernelStack: 27056 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7411176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.229 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.230 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.494 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54362444 kB' 'MemUsed: 11296564 kB' 'SwapCached: 0 kB' 'Active: 4523172 kB' 'Inactive: 3314440 kB' 'Active(anon): 4400216 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547332 kB' 'Mapped: 109344 kB' 'AnonPages: 293516 kB' 'Shmem: 4109936 kB' 'KernelStack: 13608 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 559880 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 435804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.495 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 55197488 kB' 'MemUsed: 5482340 kB' 'SwapCached: 0 kB' 'Active: 1857980 kB' 'Inactive: 178036 kB' 'Active(anon): 1590176 kB' 'Inactive(anon): 0 kB' 'Active(file): 267804 kB' 'Inactive(file): 178036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1806836 kB' 'Mapped: 81624 kB' 'AnonPages: 229276 kB' 'Shmem: 1360996 kB' 'KernelStack: 13448 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135680 kB' 'Slab: 430552 kB' 'SReclaimable: 135680 kB' 'SUnreclaim: 294872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.496 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.497 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.498 node0=512 
expecting 512 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:06.498 node1=1024 expecting 1024 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:06.498 00:04:06.498 real 0m3.481s 00:04:06.498 user 0m1.369s 00:04:06.498 sys 0m2.170s 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:06.498 14:12:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.498 ************************************ 00:04:06.498 END TEST custom_alloc 00:04:06.498 ************************************ 00:04:06.498 14:12:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.498 14:12:43 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:06.498 14:12:43 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:06.498 14:12:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.498 ************************************ 00:04:06.498 START TEST no_shrink_alloc 00:04:06.498 ************************************ 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.498 14:12:43 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.498 14:12:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.812 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.812 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:09.813 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
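The get_test_nr_hugepages trace above appears to reduce to simple arithmetic: a 2097152 kB request divided by the 2048 kB default hugepage size ("Hugepagesize: 2048 kB" in the meminfo dumps below) gives nr_hugepages=1024, which is then assigned to the single node named on the command line (node 0). A minimal sketch of that derivation, with illustrative variable names rather than the exact setup/hugepages.sh helpers:

# Illustrative sketch only -- not the verbatim SPDK helper.
# Assumes the default hugepage size is 2048 kB (2 MiB), as this system reports.
size_kb=2097152                      # requested total, in kB (2 GiB)
default_hugepage_kb=2048             # per-page size, in kB
node_ids=(0)                         # nodes named on the command line

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024

nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages   # node 0 gets all 1024 pages
done
echo "node0=${nodes_test[0]}, expecting 1024"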
# mapfile -t mem 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110627024 kB' 'MemAvailable: 113816312 kB' 'Buffers: 2704 kB' 'Cached: 9351556 kB' 'SwapCached: 0 kB' 'Active: 6376748 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985988 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518208 kB' 'Mapped: 190472 kB' 'Shmem: 5471024 kB' 'KReclaimable: 259756 kB' 'Slab: 990680 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730924 kB' 'KernelStack: 27072 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234848 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.813 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 
14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110628584 kB' 'MemAvailable: 113817368 kB' 'Buffers: 2704 kB' 'Cached: 9351560 kB' 'SwapCached: 0 kB' 'Active: 6376528 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985768 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518044 kB' 'Mapped: 190120 kB' 'Shmem: 5471028 kB' 'KReclaimable: 259756 kB' 'Slab: 990708 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730952 kB' 'KernelStack: 27104 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 
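All of the repeated [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue lines above amount to one lookup: get_meminfo reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any "Node N" prefix, scans record by record until the requested field name matches, then echoes its value. A hedged re-creation of that loop, assuming the ': ' separator and field names behave as the trace shows (names here are illustrative, not the literal setup/common.sh code):

# Illustrative sketch of the lookup the trace performs.
shopt -s extglob                     # needed for the +([0-9]) prefix-strip pattern

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer that node's meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop any "Node N " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # not the field we want yet
        echo "$val"                      # e.g. "0" for HugePages_Surp
        return 0
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Surp on the box above, this would print the same 0 the trace echoes before returning.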
16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.814 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.815 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110628584 kB' 'MemAvailable: 113817368 kB' 'Buffers: 2704 kB' 'Cached: 9351580 kB' 'SwapCached: 0 kB' 'Active: 6376452 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985692 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517896 kB' 'Mapped: 190120 kB' 'Shmem: 5471048 kB' 'KReclaimable: 259756 kB' 'Slab: 990708 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730952 kB' 'KernelStack: 27088 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
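For context, the verify_nr_hugepages pass that the no_shrink_alloc trace has now entered appears to gather three counters the same way before comparing them with the configured page counts (the comparison itself continues past this excerpt): anon (AnonHugePages, only when transparent hugepages are not set to [never]), surp (HugePages_Surp), and resv (HugePages_Rsvd). A hedged outline of that flow, reusing the get_meminfo_sketch helper above; the THP sysfs path is an assumption, since the trace only shows the resulting "always [madvise] never" string:

# Outline of the counters gathered in this part of the trace (illustrative only).
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # assumed path

anon=0
if [[ $thp_state != *"[never]"* ]]; then        # only count THP when not disabled
    anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB in this run
fi
surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
echo "anon=$anon surp=$surp resv=$resv"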
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.816 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.817 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
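A note on reading these comparisons: operands such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. They are bash xtrace's rendering of a quoted right-hand side inside [[ ]]; every character is escaped to show the value is matched literally rather than as a glob. A two-line reproduction follows (hypothetical snippet; this log's prefix comes from a custom PS4 with timestamps and the test name instead of the default `+ `):

```bash
set -x                        # enable xtrace
get=HugePages_Rsvd
[[ MemTotal == "$get" ]]      # traced roughly as: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
```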
00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 
14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
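The pass that just returned 0 is setup/common.sh's get_meminfo helper: it reads the relevant meminfo file, strips any "Node N " prefix, then walks the fields with IFS=': ' read until the requested key (HugePages_Rsvd above) matches and its value is echoed. Below is a minimal sketch of that parsing logic, assuming only what the trace shows; the helper name, file paths, and field names are taken from the log, while the real SPDK script keeps the whole file in a mem array and carries more state.

```bash
#!/usr/bin/env bash
# Simplified sketch of the get_meminfo helper traced above (setup/common.sh).
# Not the exact SPDK implementation, just the same parsing idea.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}             # field name, optional NUMA node
    local mem_f=/proc/meminfo
    # Per-node queries read the node's meminfo; its lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }      # drop the "Node N " prefix if present
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # the literal comparison seen in the trace
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0
}

# Usage mirroring the values derived in the log:
echo "surp=$(get_meminfo HugePages_Surp) resv=$(get_meminfo HugePages_Rsvd)"
```

On this host both calls print 0, matching the surp=0 and resv=0 the test records next.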
00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.818 nr_hugepages=1024 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.818 resv_hugepages=0 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.818 surplus_hugepages=0 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.818 anon_hugepages=0 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.818 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110629648 kB' 'MemAvailable: 113818432 kB' 'Buffers: 2704 kB' 'Cached: 9351600 kB' 'SwapCached: 0 kB' 'Active: 6376564 kB' 'Inactive: 3492476 kB' 'Active(anon): 5985804 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518044 kB' 'Mapped: 190120 kB' 'Shmem: 5471068 kB' 'KReclaimable: 259756 kB' 'Slab: 990708 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730952 kB' 'KernelStack: 27104 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234832 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:09.819 
14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.819 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.820 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53335640 kB' 'MemUsed: 12323368 kB' 'SwapCached: 0 kB' 'Active: 4521272 kB' 'Inactive: 3314440 kB' 'Active(anon): 4398316 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547340 kB' 'Mapped: 109348 kB' 'AnonPages: 291496 kB' 'Shmem: 4109944 kB' 'KernelStack: 13608 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 560308 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 436232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.821 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
[... get_meminfo compares each remaining /proc/meminfo key (Unevictable through HugePages_Free) against HugePages_Surp; none match, so every iteration hits 'continue' ...] 00:04:09.822 14:12:47
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.822 node0=1024 expecting 1024 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.822 14:12:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.131 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:13.131 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:13.131 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.131 14:12:50 
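The xtrace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time with IFS=': ' (when a per-node file is used it also strips the leading 'Node <n> ' prefix, as seen at common.sh@29). As a reading aid, here is a minimal standalone sketch of that lookup pattern, plus the per-node sysfs counter that the 'node0=1024 expecting 1024' check above is effectively validating; the function name below is hypothetical and nothing here is taken verbatim from the SPDK scripts:

#!/usr/bin/env bash
# Illustrative sketch only -- not the SPDK helper itself.
# Read /proc/meminfo line by line, split on ': ', and print the value
# of the requested key, mirroring the loop visible in the trace.
meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}

meminfo_value HugePages_Surp     # 0 in the trace above
meminfo_value HugePages_Total    # 1024 in the trace above

# Per-node 2 MiB hugepage count behind the 'node0=1024 expecting 1024' check:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages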
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110631828 kB' 'MemAvailable: 113820612 kB' 'Buffers: 2704 kB' 'Cached: 9351700 kB' 'SwapCached: 0 kB' 'Active: 6378964 kB' 'Inactive: 3492476 kB' 'Active(anon): 5988204 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520492 kB' 'Mapped: 190156 kB' 'Shmem: 5471168 kB' 'KReclaimable: 259756 kB' 'Slab: 990744 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730988 kB' 'KernelStack: 27040 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411504 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234768 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.131 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_
[... the same per-key scan repeats for AnonHugePages; keys MemFree through Committed_AS fail the match and hit 'continue' ...]
00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.132 14:12:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.132 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110634556 kB' 'MemAvailable: 113823340 kB' 'Buffers: 2704 kB' 'Cached: 9351704 kB' 'SwapCached: 0 kB' 'Active: 6378308 kB' 'Inactive: 3492476 kB' 'Active(anon): 5987548 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519832 kB' 'Mapped: 190156 kB' 'Shmem: 5471172 kB' 'KReclaimable: 259756 kB' 'Slab: 990736 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 730980 kB' 'KernelStack: 27056 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234752 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.133 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue
[... the per-key scan repeats for HugePages_Surp against this snapshot; keys SwapCached through CmaFree fail the match and hit 'continue' ...]
00:04:13.134 14:12:50
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.134 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
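Each meminfo snapshot printed in this trace reports HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is internally consistent; a quick arithmetic check (illustrative only):

# 1024 huge pages x 2048 kB per page = 2097152 kB, matching the Hugetlb: field.
echo $(( 1024 * 2048 ))    # prints 2097152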
00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110635368 kB' 'MemAvailable: 113824152 kB' 'Buffers: 2704 kB' 'Cached: 9351720 kB' 'SwapCached: 0 kB' 'Active: 6378140 kB' 'Inactive: 3492476 kB' 'Active(anon): 5987380 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519588 kB' 'Mapped: 190156 kB' 'Shmem: 5471188 kB' 'KReclaimable: 259756 kB' 'Slab: 990792 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 731036 kB' 'KernelStack: 27040 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7411544 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234752 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.135 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... the per-key scan continues for HugePages_Rsvd; keys SwapCached through NFS_Unstable fail the match and hit 'continue' ...]
00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.136 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.137 nr_hugepages=1024 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.137 resv_hugepages=0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.137 surplus_hugepages=0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.137 anon_hugepages=0 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 110635700 kB' 'MemAvailable: 113824484 kB' 'Buffers: 2704 kB' 'Cached: 9351744 kB' 'SwapCached: 0 kB' 'Active: 6378148 kB' 'Inactive: 3492476 kB' 'Active(anon): 5987388 kB' 'Inactive(anon): 0 kB' 'Active(file): 390760 kB' 'Inactive(file): 3492476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519500 kB' 'Mapped: 190156 kB' 'Shmem: 5471212 kB' 'KReclaimable: 259756 kB' 'Slab: 990792 kB' 'SReclaimable: 259756 kB' 'SUnreclaim: 731036 kB' 'KernelStack: 27056 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7412068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234720 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3444084 kB' 'DirectMap2M: 16158720 kB' 'DirectMap1G: 116391936 kB' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.137 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.138 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 53341384 kB' 'MemUsed: 12317624 kB' 'SwapCached: 0 kB' 'Active: 4523640 kB' 'Inactive: 3314440 kB' 'Active(anon): 4400684 kB' 'Inactive(anon): 0 kB' 'Active(file): 122956 kB' 'Inactive(file): 3314440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7547372 kB' 'Mapped: 109416 kB' 'AnonPages: 294004 kB' 'Shmem: 4109976 kB' 'KernelStack: 13608 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124076 kB' 'Slab: 560432 kB' 'SReclaimable: 124076 kB' 'SUnreclaim: 436356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 
14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.139 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.140 node0=1024 expecting 1024 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.140 00:04:13.140 real 0m6.714s 00:04:13.140 user 0m2.574s 00:04:13.140 sys 0m4.196s 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.140 14:12:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.140 ************************************ 00:04:13.140 END TEST no_shrink_alloc 00:04:13.140 ************************************ 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
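The wall of xtrace above is get_meminfo from setup/common.sh expanding the same lookup three times (HugePages_Rsvd, HugePages_Total, then HugePages_Surp for node 0): pick /proc/meminfo or the per-node meminfo file, split each "Key: value" line, and print the value once the requested key matches. A minimal stand-alone sketch of that lookup (the name get_meminfo_sketch and the sed-based prefix strip are illustrative, not the literal script code):

  get_meminfo_sketch() {
      local key=$1 node=$2
      local file=/proc/meminfo
      # Per-node queries read that node's own meminfo; its lines carry a "Node <id> " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      local var val rest
      while IFS=': ' read -r var val rest; do
          if [[ $var == "$key" ]]; then
              echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd/HugePages_Surp
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$file")
  }

With those three values the no_shrink_alloc check above reduces to the arithmetic already traced: 1024 == nr_hugepages + surplus + reserved globally, and node0 still reports all 1024 pages ("node0=1024 expecting 1024"), i.e. the allocation was not shrunk.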
00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.140 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:13.401 14:12:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:13.401 00:04:13.401 real 0m24.657s 00:04:13.401 user 0m9.546s 00:04:13.401 sys 0m15.356s 00:04:13.401 14:12:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.401 14:12:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.401 ************************************ 00:04:13.401 END TEST hugepages 00:04:13.401 ************************************ 00:04:13.401 14:12:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.401 14:12:50 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.401 14:12:50 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.401 14:12:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.401 ************************************ 00:04:13.401 START TEST driver 00:04:13.401 ************************************ 00:04:13.401 14:12:50 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:13.401 * Looking for test storage... 
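The hugepages suite closes above with clear_hp: every hugepage size directory under every NUMA node is walked and its counter is reset to 0 before CLEAR_HUGE=yes is exported. A minimal sketch of that loop (the nr_hugepages target is an assumption based on the sysfs layout; the trace itself only shows the bare echo 0; needs root):

for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target file, resets this node's pool for this page size
    done
done
export CLEAR_HUGE=yes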
00:04:13.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.401 14:12:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:13.401 14:12:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.401 14:12:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.679 14:12:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:17.679 14:12:55 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:17.679 14:12:55 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.679 14:12:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.679 ************************************ 00:04:17.679 START TEST guess_driver 00:04:17.679 ************************************ 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:17.679 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:17.680 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:17.680 Looking for driver=vfio-pci 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.680 14:12:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:20.986 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.247 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.248 14:12:58 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.570 00:04:26.570 real 0m8.036s 00:04:26.570 user 0m2.482s 00:04:26.570 sys 0m4.670s 00:04:26.570 14:13:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:26.570 14:13:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:26.570 ************************************ 00:04:26.571 END TEST guess_driver 00:04:26.571 ************************************ 00:04:26.571 00:04:26.571 real 0m12.476s 00:04:26.571 user 0m3.604s 00:04:26.571 sys 0m7.053s 00:04:26.571 14:13:03 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:26.571 
14:13:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:26.571 ************************************ 00:04:26.571 END TEST driver 00:04:26.571 ************************************ 00:04:26.571 14:13:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:26.571 14:13:03 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:26.571 14:13:03 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:26.571 14:13:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.571 ************************************ 00:04:26.571 START TEST devices 00:04:26.571 ************************************ 00:04:26.571 14:13:03 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:26.571 * Looking for test storage... 00:04:26.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.571 14:13:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:26.571 14:13:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:26.571 14:13:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.571 14:13:03 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:29.872 14:13:07 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:29.872 No valid GPT data, 
bailing 00:04:29.872 14:13:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:29.872 14:13:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:29.872 14:13:07 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.872 14:13:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.872 ************************************ 00:04:29.872 START TEST nvme_mount 00:04:29.872 ************************************ 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:29.872 14:13:07 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.872 14:13:07 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.812 Creating new GPT entries in memory. 00:04:30.812 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.812 other utilities. 00:04:30.812 14:13:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.812 14:13:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.812 14:13:08 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.812 14:13:08 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.812 14:13:08 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.753 Creating new GPT entries in memory. 00:04:31.753 The operation has completed successfully. 00:04:31.753 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.753 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.753 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2794034 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
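The nvme_mount setup traced above wipes the GPT on /dev/nvme0n1, creates a single 1 GiB partition (sectors 2048 through 2099199), formats it with ext4 and mounts it under the test directory, with sync_dev_uevents.sh waiting for the partition uevent in the background. A condensed sketch of that sequence using the same commands as the trace (run as root):

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition, serialized on the disk node
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"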
00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.014 14:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:35.312 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.312 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.573 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:35.573 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.573 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.573 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.573 14:13:12 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:35.573 14:13:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:35.573 14:13:12 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.573 14:13:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:35.573 14:13:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.573 14:13:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.871 14:13:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
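Between the two verify passes above, the partition-based mount is torn down (cleanup_nvme) and ext4 is then laid directly on the unpartitioned disk, capped at 1024M, which is what produces the wipefs "bytes were erased" lines in the log. A minimal sketch of that teardown and reformat, assuming the same device and mount point as the trace:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

mountpoint -q "$mnt" && umount "$mnt"
wipefs --all /dev/nvme0n1p1         # drop the ext4 signature from the old partition
wipefs --all /dev/nvme0n1           # drop the GPT/PMBR signatures from the disk itself
mkfs.ext4 -qF /dev/nvme0n1 1024M    # new filesystem on the whole disk, limited to 1024M
mount /dev/nvme0n1 "$mnt"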
00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.169 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.170 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.170 00:04:42.170 real 0m12.281s 00:04:42.170 user 0m3.684s 00:04:42.170 sys 0m6.472s 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:42.170 14:13:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.170 ************************************ 00:04:42.170 END TEST nvme_mount 00:04:42.170 ************************************ 00:04:42.170 
14:13:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.170 14:13:19 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:42.170 14:13:19 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:42.170 14:13:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.170 ************************************ 00:04:42.170 START TEST dm_mount 00:04:42.170 ************************************ 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.170 14:13:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:43.110 Creating new GPT entries in memory. 00:04:43.110 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.110 other utilities. 00:04:43.110 14:13:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.110 14:13:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.110 14:13:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.110 14:13:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.110 14:13:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:44.493 Creating new GPT entries in memory. 00:04:44.493 The operation has completed successfully. 
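The dm_mount test starting above reuses partition_drive with part_no=2: the GPT is zapped, sync_dev_uevents.sh waits for both partition uevents, and two 1 GiB partitions are created with flock + sgdisk (the first completes here; the second follows in the next records, and the pair is later combined into the nvme_dm_test device-mapper target). A sketch of the two sgdisk calls with the start/end sectors taken from the trace:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1, 1 GiB
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2, 1 GiB (shown in the records that follow)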
00:04:44.493 14:13:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.493 14:13:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.493 14:13:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.493 14:13:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.493 14:13:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:45.438 The operation has completed successfully. 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2798889 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.438 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.439 14:13:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:48.782 
14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.782 14:13:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.084 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.084 00:04:52.084 real 0m9.709s 00:04:52.084 user 0m2.468s 00:04:52.084 sys 0m4.278s 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.084 14:13:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.084 ************************************ 00:04:52.084 END TEST dm_mount 00:04:52.084 ************************************ 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.084 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.084 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.084 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.084 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.084 14:13:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.345 00:04:52.345 real 0m26.317s 00:04:52.345 user 0m7.621s 00:04:52.345 sys 0m13.480s 00:04:52.345 14:13:29 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.345 14:13:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.345 ************************************ 00:04:52.345 END TEST devices 00:04:52.345 ************************************ 00:04:52.345 00:04:52.345 real 1m27.953s 00:04:52.345 user 0m28.781s 00:04:52.345 sys 0m50.045s 00:04:52.345 14:13:29 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.345 14:13:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.345 ************************************ 00:04:52.345 END TEST setup.sh 00:04:52.345 ************************************ 00:04:52.345 14:13:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:55.666 Hugepages 00:04:55.666 node hugesize free / total 00:04:55.666 node0 1048576kB 0 / 0 00:04:55.666 node0 2048kB 2048 / 2048 00:04:55.666 node1 1048576kB 0 / 0 00:04:55.666 node1 2048kB 0 / 0 00:04:55.666 00:04:55.666 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.666 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:55.666 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:55.927 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:55.927 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:55.927 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:55.927 14:13:33 -- spdk/autotest.sh@130 -- # uname -s 
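Annotation: the `setup.sh status` block above reports hugepage availability per NUMA node (node0 has all 2048 of its 2 MiB pages free, node1 has none, which is why the EAL later prints "No free 2048 kB hugepages reported on node 1") plus every I/OAT and NVMe device with its current driver binding. As a hedged aside, the per-node hugepage counts can also be read straight from sysfs; the paths below are the standard kernel locations, not commands taken from this log:

  # Sketch: per-node 2 MiB hugepage counts from sysfs (standard kernel paths, not from this trace)
  for node in /sys/devices/system/node/node[0-9]*; do
    hp="$node/hugepages/hugepages-2048kB"
    echo "$(basename "$node"): $(cat "$hp/free_hugepages") free of $(cat "$hp/nr_hugepages") 2048kB pages"
  done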
00:04:55.927 14:13:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:55.927 14:13:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:55.927 14:13:33 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.473 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.473 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.473 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.473 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.473 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.473 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.733 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.644 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:00.644 14:13:37 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:01.585 14:13:38 -- common/autotest_common.sh@1532 -- # bdfs=() 00:05:01.585 14:13:38 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:01.585 14:13:38 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.585 14:13:38 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:01.585 14:13:38 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:01.585 14:13:38 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:01.585 14:13:38 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.585 14:13:38 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.585 14:13:38 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:01.585 14:13:39 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:01.585 14:13:39 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:01.585 14:13:39 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.886 Waiting for block devices as requested 00:05:04.886 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.886 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:04.886 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:04.886 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.146 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.146 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.146 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.408 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:05.408 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:05.408 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:05.670 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.670 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.670 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.930 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.930 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.930 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:06.189 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:06.189 14:13:43 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:05:06.189 14:13:43 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:05:06.189 14:13:43 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:06.189 14:13:43 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:06.189 14:13:43 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:06.189 14:13:43 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:06.189 14:13:43 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:05:06.189 14:13:43 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:06.189 14:13:43 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:06.189 14:13:43 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:06.189 14:13:43 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:06.189 14:13:43 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:06.189 14:13:43 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:06.189 14:13:43 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:06.189 14:13:43 -- common/autotest_common.sh@1556 -- # continue 00:05:06.189 14:13:43 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.189 14:13:43 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:06.189 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 14:13:43 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.189 14:13:43 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:06.189 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:06.189 14:13:43 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.484 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.484 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.745 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:09.745 14:13:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:09.745 14:13:47 -- common/autotest_common.sh@729 -- # xtrace_disable 
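Annotation: the pre-cleanup loop above reads the OACS field from `nvme id-ctrl /dev/nvme0` (0x5f here), masks out bit 3 to get `oacs_ns_manage=8` (namespace management supported), then checks `unvmcap`; since unallocated NVM capacity is 0 it simply continues without reverting any namespaces. A minimal sketch of that capability check, assuming the nvme-cli output format seen in the trace (run as root):

  # Sketch: test the Namespace Management bit (0x8) of OACS, as the helper above does
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
    echo "nvme0 supports namespace management"
  fi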
00:05:09.745 14:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.745 14:13:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:09.745 14:13:47 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:09.745 14:13:47 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.745 14:13:47 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:09.745 14:13:47 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:09.745 14:13:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:09.745 14:13:47 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:09.745 14:13:47 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:09.745 14:13:47 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.745 14:13:47 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.745 14:13:47 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:10.006 14:13:47 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:10.006 14:13:47 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:10.006 14:13:47 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:10.006 14:13:47 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:10.006 14:13:47 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:05:10.006 14:13:47 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:10.006 14:13:47 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:05:10.006 14:13:47 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:05:10.006 14:13:47 -- common/autotest_common.sh@1592 -- # return 0 00:05:10.006 14:13:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:10.006 14:13:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:10.006 14:13:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.006 14:13:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.006 14:13:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:10.006 14:13:47 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:10.006 14:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.006 14:13:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:10.006 14:13:47 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.006 14:13:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:10.007 14:13:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:10.007 14:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.007 ************************************ 00:05:10.007 START TEST env 00:05:10.007 ************************************ 00:05:10.007 14:13:47 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.007 * Looking for test storage... 
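Annotation: `opal_revert_cleanup` above only acts on controllers whose PCI device id is 0x0a54; the controller on this node reports vendor 144d, device 0xa80a (a Samsung part), so nothing is reverted. Both that step and the earlier pre-cleanup enumerate NVMe controllers the same way: `scripts/gen_nvme.sh` emits a bdev JSON config and `jq` extracts each controller's PCI address. A standalone sketch of that enumeration, using the same commands as the trace, run from the spdk checkout:

  # Sketch: list NVMe PCI addresses the way get_nvme_bdfs does above
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # 0000:65:00.0 on this node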
00:05:10.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:10.007 14:13:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.007 14:13:47 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:10.007 14:13:47 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:10.007 14:13:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.007 ************************************ 00:05:10.007 START TEST env_memory 00:05:10.007 ************************************ 00:05:10.007 14:13:47 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.007 00:05:10.007 00:05:10.007 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.007 http://cunit.sourceforge.net/ 00:05:10.007 00:05:10.007 00:05:10.007 Suite: memory 00:05:10.007 Test: alloc and free memory map ...[2024-06-10 14:13:47.578169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.007 passed 00:05:10.007 Test: mem map translation ...[2024-06-10 14:13:47.595718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.007 [2024-06-10 14:13:47.595736] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.007 [2024-06-10 14:13:47.595770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.007 [2024-06-10 14:13:47.595775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.268 passed 00:05:10.268 Test: mem map registration ...[2024-06-10 14:13:47.633608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:10.268 [2024-06-10 14:13:47.633619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:10.268 passed 00:05:10.268 Test: mem map adjacent registrations ...passed 00:05:10.268 00:05:10.268 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.268 suites 1 1 n/a 0 0 00:05:10.268 tests 4 4 4 0 0 00:05:10.268 asserts 152 152 152 0 n/a 00:05:10.268 00:05:10.268 Elapsed time = 0.125 seconds 00:05:10.268 00:05:10.268 real 0m0.130s 00:05:10.268 user 0m0.123s 00:05:10.268 sys 0m0.006s 00:05:10.268 14:13:47 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:10.268 14:13:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:10.268 ************************************ 00:05:10.268 END TEST env_memory 00:05:10.268 ************************************ 00:05:10.268 14:13:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.268 14:13:47 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:10.268 14:13:47 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:05:10.268 14:13:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.268 ************************************ 00:05:10.268 START TEST env_vtophys 00:05:10.268 ************************************ 00:05:10.268 14:13:47 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.268 EAL: lib.eal log level changed from notice to debug 00:05:10.268 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.268 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.268 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.268 EAL: Detected lcore 3 as core 3 on socket 0 00:05:10.268 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.268 EAL: Detected lcore 5 as core 5 on socket 0 00:05:10.268 EAL: Detected lcore 6 as core 6 on socket 0 00:05:10.268 EAL: Detected lcore 7 as core 7 on socket 0 00:05:10.268 EAL: Detected lcore 8 as core 8 on socket 0 00:05:10.268 EAL: Detected lcore 9 as core 9 on socket 0 00:05:10.268 EAL: Detected lcore 10 as core 10 on socket 0 00:05:10.268 EAL: Detected lcore 11 as core 11 on socket 0 00:05:10.268 EAL: Detected lcore 12 as core 12 on socket 0 00:05:10.268 EAL: Detected lcore 13 as core 13 on socket 0 00:05:10.268 EAL: Detected lcore 14 as core 14 on socket 0 00:05:10.268 EAL: Detected lcore 15 as core 15 on socket 0 00:05:10.268 EAL: Detected lcore 16 as core 16 on socket 0 00:05:10.268 EAL: Detected lcore 17 as core 17 on socket 0 00:05:10.268 EAL: Detected lcore 18 as core 18 on socket 0 00:05:10.268 EAL: Detected lcore 19 as core 19 on socket 0 00:05:10.268 EAL: Detected lcore 20 as core 20 on socket 0 00:05:10.268 EAL: Detected lcore 21 as core 21 on socket 0 00:05:10.268 EAL: Detected lcore 22 as core 22 on socket 0 00:05:10.268 EAL: Detected lcore 23 as core 23 on socket 0 00:05:10.268 EAL: Detected lcore 24 as core 24 on socket 0 00:05:10.268 EAL: Detected lcore 25 as core 25 on socket 0 00:05:10.268 EAL: Detected lcore 26 as core 26 on socket 0 00:05:10.268 EAL: Detected lcore 27 as core 27 on socket 0 00:05:10.268 EAL: Detected lcore 28 as core 28 on socket 0 00:05:10.268 EAL: Detected lcore 29 as core 29 on socket 0 00:05:10.268 EAL: Detected lcore 30 as core 30 on socket 0 00:05:10.268 EAL: Detected lcore 31 as core 31 on socket 0 00:05:10.268 EAL: Detected lcore 32 as core 32 on socket 0 00:05:10.268 EAL: Detected lcore 33 as core 33 on socket 0 00:05:10.268 EAL: Detected lcore 34 as core 34 on socket 0 00:05:10.268 EAL: Detected lcore 35 as core 35 on socket 0 00:05:10.268 EAL: Detected lcore 36 as core 0 on socket 1 00:05:10.269 EAL: Detected lcore 37 as core 1 on socket 1 00:05:10.269 EAL: Detected lcore 38 as core 2 on socket 1 00:05:10.269 EAL: Detected lcore 39 as core 3 on socket 1 00:05:10.269 EAL: Detected lcore 40 as core 4 on socket 1 00:05:10.269 EAL: Detected lcore 41 as core 5 on socket 1 00:05:10.269 EAL: Detected lcore 42 as core 6 on socket 1 00:05:10.269 EAL: Detected lcore 43 as core 7 on socket 1 00:05:10.269 EAL: Detected lcore 44 as core 8 on socket 1 00:05:10.269 EAL: Detected lcore 45 as core 9 on socket 1 00:05:10.269 EAL: Detected lcore 46 as core 10 on socket 1 00:05:10.269 EAL: Detected lcore 47 as core 11 on socket 1 00:05:10.269 EAL: Detected lcore 48 as core 12 on socket 1 00:05:10.269 EAL: Detected lcore 49 as core 13 on socket 1 00:05:10.269 EAL: Detected lcore 50 as core 14 on socket 1 00:05:10.269 EAL: Detected lcore 51 as core 15 on socket 1 00:05:10.269 EAL: Detected lcore 52 as core 16 on socket 1 00:05:10.269 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:10.269 EAL: Detected lcore 54 as core 18 on socket 1 00:05:10.269 EAL: Detected lcore 55 as core 19 on socket 1 00:05:10.269 EAL: Detected lcore 56 as core 20 on socket 1 00:05:10.269 EAL: Detected lcore 57 as core 21 on socket 1 00:05:10.269 EAL: Detected lcore 58 as core 22 on socket 1 00:05:10.269 EAL: Detected lcore 59 as core 23 on socket 1 00:05:10.269 EAL: Detected lcore 60 as core 24 on socket 1 00:05:10.269 EAL: Detected lcore 61 as core 25 on socket 1 00:05:10.269 EAL: Detected lcore 62 as core 26 on socket 1 00:05:10.269 EAL: Detected lcore 63 as core 27 on socket 1 00:05:10.269 EAL: Detected lcore 64 as core 28 on socket 1 00:05:10.269 EAL: Detected lcore 65 as core 29 on socket 1 00:05:10.269 EAL: Detected lcore 66 as core 30 on socket 1 00:05:10.269 EAL: Detected lcore 67 as core 31 on socket 1 00:05:10.269 EAL: Detected lcore 68 as core 32 on socket 1 00:05:10.269 EAL: Detected lcore 69 as core 33 on socket 1 00:05:10.269 EAL: Detected lcore 70 as core 34 on socket 1 00:05:10.269 EAL: Detected lcore 71 as core 35 on socket 1 00:05:10.269 EAL: Detected lcore 72 as core 0 on socket 0 00:05:10.269 EAL: Detected lcore 73 as core 1 on socket 0 00:05:10.269 EAL: Detected lcore 74 as core 2 on socket 0 00:05:10.269 EAL: Detected lcore 75 as core 3 on socket 0 00:05:10.269 EAL: Detected lcore 76 as core 4 on socket 0 00:05:10.269 EAL: Detected lcore 77 as core 5 on socket 0 00:05:10.269 EAL: Detected lcore 78 as core 6 on socket 0 00:05:10.269 EAL: Detected lcore 79 as core 7 on socket 0 00:05:10.269 EAL: Detected lcore 80 as core 8 on socket 0 00:05:10.269 EAL: Detected lcore 81 as core 9 on socket 0 00:05:10.269 EAL: Detected lcore 82 as core 10 on socket 0 00:05:10.269 EAL: Detected lcore 83 as core 11 on socket 0 00:05:10.269 EAL: Detected lcore 84 as core 12 on socket 0 00:05:10.269 EAL: Detected lcore 85 as core 13 on socket 0 00:05:10.269 EAL: Detected lcore 86 as core 14 on socket 0 00:05:10.269 EAL: Detected lcore 87 as core 15 on socket 0 00:05:10.269 EAL: Detected lcore 88 as core 16 on socket 0 00:05:10.269 EAL: Detected lcore 89 as core 17 on socket 0 00:05:10.269 EAL: Detected lcore 90 as core 18 on socket 0 00:05:10.269 EAL: Detected lcore 91 as core 19 on socket 0 00:05:10.269 EAL: Detected lcore 92 as core 20 on socket 0 00:05:10.269 EAL: Detected lcore 93 as core 21 on socket 0 00:05:10.269 EAL: Detected lcore 94 as core 22 on socket 0 00:05:10.269 EAL: Detected lcore 95 as core 23 on socket 0 00:05:10.269 EAL: Detected lcore 96 as core 24 on socket 0 00:05:10.269 EAL: Detected lcore 97 as core 25 on socket 0 00:05:10.269 EAL: Detected lcore 98 as core 26 on socket 0 00:05:10.269 EAL: Detected lcore 99 as core 27 on socket 0 00:05:10.269 EAL: Detected lcore 100 as core 28 on socket 0 00:05:10.269 EAL: Detected lcore 101 as core 29 on socket 0 00:05:10.269 EAL: Detected lcore 102 as core 30 on socket 0 00:05:10.269 EAL: Detected lcore 103 as core 31 on socket 0 00:05:10.269 EAL: Detected lcore 104 as core 32 on socket 0 00:05:10.269 EAL: Detected lcore 105 as core 33 on socket 0 00:05:10.269 EAL: Detected lcore 106 as core 34 on socket 0 00:05:10.269 EAL: Detected lcore 107 as core 35 on socket 0 00:05:10.269 EAL: Detected lcore 108 as core 0 on socket 1 00:05:10.269 EAL: Detected lcore 109 as core 1 on socket 1 00:05:10.269 EAL: Detected lcore 110 as core 2 on socket 1 00:05:10.269 EAL: Detected lcore 111 as core 3 on socket 1 00:05:10.269 EAL: Detected lcore 112 as core 4 on socket 1 00:05:10.269 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:10.269 EAL: Detected lcore 114 as core 6 on socket 1 00:05:10.269 EAL: Detected lcore 115 as core 7 on socket 1 00:05:10.269 EAL: Detected lcore 116 as core 8 on socket 1 00:05:10.269 EAL: Detected lcore 117 as core 9 on socket 1 00:05:10.269 EAL: Detected lcore 118 as core 10 on socket 1 00:05:10.269 EAL: Detected lcore 119 as core 11 on socket 1 00:05:10.269 EAL: Detected lcore 120 as core 12 on socket 1 00:05:10.269 EAL: Detected lcore 121 as core 13 on socket 1 00:05:10.269 EAL: Detected lcore 122 as core 14 on socket 1 00:05:10.269 EAL: Detected lcore 123 as core 15 on socket 1 00:05:10.269 EAL: Detected lcore 124 as core 16 on socket 1 00:05:10.269 EAL: Detected lcore 125 as core 17 on socket 1 00:05:10.269 EAL: Detected lcore 126 as core 18 on socket 1 00:05:10.269 EAL: Detected lcore 127 as core 19 on socket 1 00:05:10.269 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:10.269 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:10.269 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:10.269 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:10.269 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:10.269 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:10.269 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:10.269 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:10.269 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:10.269 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:10.269 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:10.269 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:10.269 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:10.269 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:10.269 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:10.269 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:10.269 EAL: Maximum logical cores by configuration: 128 00:05:10.269 EAL: Detected CPU lcores: 128 00:05:10.269 EAL: Detected NUMA nodes: 2 00:05:10.269 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:10.269 EAL: Detected shared linkage of DPDK 00:05:10.269 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.269 EAL: Bus pci wants IOVA as 'DC' 00:05:10.269 EAL: Buses did not request a specific IOVA mode. 00:05:10.269 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.269 EAL: Selected IOVA mode 'VA' 00:05:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.269 EAL: Probing VFIO support... 00:05:10.269 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.269 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.269 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.269 EAL: VFIO support initialized 00:05:10.269 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.269 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.269 EAL: Setting up physically contiguous memory... 
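Annotation: the lcore listing above describes a 2-socket box with 36 cores per socket and SMT2: socket 0 holds lcores 0-35 plus HT siblings 72-107, socket 1 holds 36-71 plus 108-127, and lcores 128-143 are "Skipped" because the configured maximum of 128 logical cores is reached. A quick arithmetic check of those counts (illustrative only):

  # Sketch: the arithmetic behind the lcore listing above
  echo $(( 2 * 36 * 2 ))   # 144 hardware threads: 2 sockets x 36 cores x 2 SMT threads
  echo $(( 144 - 16 ))     # 128 usable lcores; 128-143 are skipped by the configured 128-lcore maximum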
00:05:10.269 EAL: Setting maximum number of open files to 524288 00:05:10.269 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.269 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.269 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.269 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.269 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.269 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.269 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.269 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.269 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:10.269 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.269 EAL: Hugepages will be freed exactly as allocated. 00:05:10.269 EAL: No shared files mode enabled, IPC is disabled 00:05:10.269 EAL: No shared files mode enabled, IPC is disabled 00:05:10.269 EAL: TSC frequency is ~2400000 KHz 00:05:10.269 EAL: Main lcore 0 is ready (tid=7f5f4fbc2a00;cpuset=[0]) 00:05:10.269 EAL: Trying to obtain current memory policy. 00:05:10.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.269 EAL: Restoring previous memory policy: 0 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.270 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.270 00:05:10.270 00:05:10.270 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.270 http://cunit.sourceforge.net/ 00:05:10.270 00:05:10.270 00:05:10.270 Suite: components_suite 00:05:10.270 Test: vtophys_malloc_test ...passed 00:05:10.270 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.270 EAL: Restoring previous memory policy: 4 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.270 EAL: Trying to obtain current memory policy. 00:05:10.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.270 EAL: Restoring previous memory policy: 4 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.270 EAL: Trying to obtain current memory policy. 00:05:10.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.270 EAL: Restoring previous memory policy: 4 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.270 EAL: Trying to obtain current memory policy. 
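Annotation: the eight "VA reserved for memseg list" reservations printed a few lines above are sized directly from the segment-list parameters: each list holds n_segs:8192 segments of hugepage_sz:2097152 bytes, i.e. 16 GiB of virtual address space per list (0x400000000), with a small 0x61000 header area in front. A one-line check of that figure (illustrative only):

  # Sketch: reserved VA per memseg list follows from n_segs * hugepage_sz
  printf '0x%x\n' $(( 8192 * 2097152 ))   # 0x400000000, matching each "size = 0x400000000" reservation above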
00:05:10.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.270 EAL: Restoring previous memory policy: 4 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.270 EAL: request: mp_malloc_sync 00:05:10.270 EAL: No shared files mode enabled, IPC is disabled 00:05:10.270 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.270 EAL: Trying to obtain current memory policy. 00:05:10.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.541 EAL: Restoring previous memory policy: 4 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.541 EAL: Trying to obtain current memory policy. 00:05:10.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.541 EAL: Restoring previous memory policy: 4 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.541 EAL: Trying to obtain current memory policy. 00:05:10.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.541 EAL: Restoring previous memory policy: 4 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.541 EAL: Trying to obtain current memory policy. 00:05:10.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.541 EAL: Restoring previous memory policy: 4 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.541 EAL: Trying to obtain current memory policy. 
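Annotation: the heap-expansion sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258 MB above, and 514 and 1026 MB just below) follow a 2^n + 2 MiB pattern, consistent with the test doubling its allocation each round plus what looks like a fixed 2 MiB of allocator overhead (the overhead interpretation is an assumption, not stated in the log). The pattern itself can be reproduced with a one-liner:

  # Sketch: the expansion sizes printed above are 2^n + 2 MiB
  for n in 1 2 3 4 5 6 7 8 9 10; do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB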
00:05:10.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.541 EAL: Restoring previous memory policy: 4 00:05:10.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.541 EAL: request: mp_malloc_sync 00:05:10.541 EAL: No shared files mode enabled, IPC is disabled 00:05:10.541 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.838 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.838 EAL: request: mp_malloc_sync 00:05:10.838 EAL: No shared files mode enabled, IPC is disabled 00:05:10.838 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.838 EAL: Trying to obtain current memory policy. 00:05:10.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.839 EAL: Restoring previous memory policy: 4 00:05:10.839 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.839 EAL: request: mp_malloc_sync 00:05:10.839 EAL: No shared files mode enabled, IPC is disabled 00:05:10.839 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.107 EAL: request: mp_malloc_sync 00:05:11.107 EAL: No shared files mode enabled, IPC is disabled 00:05:11.107 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.107 passed 00:05:11.107 00:05:11.107 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.107 suites 1 1 n/a 0 0 00:05:11.107 tests 2 2 2 0 0 00:05:11.107 asserts 497 497 497 0 n/a 00:05:11.107 00:05:11.107 Elapsed time = 0.659 seconds 00:05:11.107 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.107 EAL: request: mp_malloc_sync 00:05:11.107 EAL: No shared files mode enabled, IPC is disabled 00:05:11.107 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.107 EAL: No shared files mode enabled, IPC is disabled 00:05:11.107 EAL: No shared files mode enabled, IPC is disabled 00:05:11.107 EAL: No shared files mode enabled, IPC is disabled 00:05:11.107 00:05:11.107 real 0m0.791s 00:05:11.107 user 0m0.411s 00:05:11.107 sys 0m0.350s 00:05:11.107 14:13:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:11.107 14:13:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.107 ************************************ 00:05:11.107 END TEST env_vtophys 00:05:11.107 ************************************ 00:05:11.107 14:13:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.107 14:13:48 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:11.107 14:13:48 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:11.107 14:13:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.107 ************************************ 00:05:11.107 START TEST env_pci 00:05:11.107 ************************************ 00:05:11.107 14:13:48 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.108 00:05:11.108 00:05:11.108 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.108 http://cunit.sourceforge.net/ 00:05:11.108 00:05:11.108 00:05:11.108 Suite: pci 00:05:11.108 Test: pci_hook ...[2024-06-10 14:13:48.626567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2809816 has claimed it 00:05:11.108 EAL: Cannot find device (10000:00:01.0) 00:05:11.108 EAL: Failed to attach device on primary process 00:05:11.108 passed 00:05:11.108 00:05:11.108 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:11.108 suites 1 1 n/a 0 0 00:05:11.108 tests 1 1 1 0 0 00:05:11.108 asserts 25 25 25 0 n/a 00:05:11.108 00:05:11.108 Elapsed time = 0.029 seconds 00:05:11.108 00:05:11.108 real 0m0.048s 00:05:11.108 user 0m0.012s 00:05:11.108 sys 0m0.036s 00:05:11.108 14:13:48 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:11.108 14:13:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.108 ************************************ 00:05:11.108 END TEST env_pci 00:05:11.108 ************************************ 00:05:11.108 14:13:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.108 14:13:48 env -- env/env.sh@15 -- # uname 00:05:11.368 14:13:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.368 14:13:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.368 14:13:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.368 14:13:48 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:11.368 14:13:48 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:11.368 14:13:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.368 ************************************ 00:05:11.368 START TEST env_dpdk_post_init 00:05:11.368 ************************************ 00:05:11.368 14:13:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.368 EAL: Detected CPU lcores: 128 00:05:11.368 EAL: Detected NUMA nodes: 2 00:05:11.368 EAL: Detected shared linkage of DPDK 00:05:11.368 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.368 EAL: Selected IOVA mode 'VA' 00:05:11.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.368 EAL: VFIO support initialized 00:05:11.368 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.368 EAL: Using IOMMU type 1 (Type 1) 00:05:11.628 EAL: Ignore mapping IO port bar(1) 00:05:11.628 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:11.889 EAL: Ignore mapping IO port bar(1) 00:05:11.889 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:11.889 EAL: Ignore mapping IO port bar(1) 00:05:12.149 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:12.149 EAL: Ignore mapping IO port bar(1) 00:05:12.410 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:12.410 EAL: Ignore mapping IO port bar(1) 00:05:12.410 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:12.671 EAL: Ignore mapping IO port bar(1) 00:05:12.671 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:12.930 EAL: Ignore mapping IO port bar(1) 00:05:12.930 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:13.189 EAL: Ignore mapping IO port bar(1) 00:05:13.189 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:13.448 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:13.449 EAL: Ignore mapping IO port bar(1) 00:05:13.708 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:13.708 EAL: Ignore mapping IO port bar(1) 00:05:13.968 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
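Annotation: env_dpdk_post_init above is launched with `-c 0x1 --base-virtaddr=0x200000000000`, so it runs on lcore 0 only and maps its memory at a fixed base address; it then probes every vfio-bound device, attaching spdk_ioat to the sixteen I/OAT channels and spdk_nvme to 0000:65:00.0 (shown just below). A hedged sketch of re-running just that binary by hand, with the same arguments as the trace (run as root so the VFIO devices can be mapped):

  # Sketch: re-run the post-init probe on its own, as the harness does above
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000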
00:05:13.968 EAL: Ignore mapping IO port bar(1) 00:05:14.228 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:14.228 EAL: Ignore mapping IO port bar(1) 00:05:14.228 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:14.489 EAL: Ignore mapping IO port bar(1) 00:05:14.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:14.751 EAL: Ignore mapping IO port bar(1) 00:05:14.751 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:15.012 EAL: Ignore mapping IO port bar(1) 00:05:15.012 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:15.012 EAL: Ignore mapping IO port bar(1) 00:05:15.271 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:15.271 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:15.271 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:15.271 Starting DPDK initialization... 00:05:15.271 Starting SPDK post initialization... 00:05:15.271 SPDK NVMe probe 00:05:15.271 Attaching to 0000:65:00.0 00:05:15.271 Attached to 0000:65:00.0 00:05:15.271 Cleaning up... 00:05:17.183 00:05:17.183 real 0m5.735s 00:05:17.183 user 0m0.191s 00:05:17.183 sys 0m0.092s 00:05:17.183 14:13:54 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.183 14:13:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.183 ************************************ 00:05:17.183 END TEST env_dpdk_post_init 00:05:17.183 ************************************ 00:05:17.183 14:13:54 env -- env/env.sh@26 -- # uname 00:05:17.183 14:13:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.183 14:13:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.183 14:13:54 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.183 14:13:54 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.183 14:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.183 ************************************ 00:05:17.183 START TEST env_mem_callbacks 00:05:17.183 ************************************ 00:05:17.183 14:13:54 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.183 EAL: Detected CPU lcores: 128 00:05:17.183 EAL: Detected NUMA nodes: 2 00:05:17.183 EAL: Detected shared linkage of DPDK 00:05:17.183 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.183 EAL: Selected IOVA mode 'VA' 00:05:17.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.183 EAL: VFIO support initialized 00:05:17.183 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.183 00:05:17.183 00:05:17.183 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.183 http://cunit.sourceforge.net/ 00:05:17.183 00:05:17.183 00:05:17.183 Suite: memory 00:05:17.183 Test: test ... 
00:05:17.183 register 0x200000200000 2097152 00:05:17.183 malloc 3145728 00:05:17.183 register 0x200000400000 4194304 00:05:17.183 buf 0x200000500000 len 3145728 PASSED 00:05:17.183 malloc 64 00:05:17.183 buf 0x2000004fff40 len 64 PASSED 00:05:17.183 malloc 4194304 00:05:17.183 register 0x200000800000 6291456 00:05:17.183 buf 0x200000a00000 len 4194304 PASSED 00:05:17.183 free 0x200000500000 3145728 00:05:17.183 free 0x2000004fff40 64 00:05:17.183 unregister 0x200000400000 4194304 PASSED 00:05:17.183 free 0x200000a00000 4194304 00:05:17.183 unregister 0x200000800000 6291456 PASSED 00:05:17.183 malloc 8388608 00:05:17.183 register 0x200000400000 10485760 00:05:17.183 buf 0x200000600000 len 8388608 PASSED 00:05:17.184 free 0x200000600000 8388608 00:05:17.184 unregister 0x200000400000 10485760 PASSED 00:05:17.184 passed 00:05:17.184 00:05:17.184 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.184 suites 1 1 n/a 0 0 00:05:17.184 tests 1 1 1 0 0 00:05:17.184 asserts 15 15 15 0 n/a 00:05:17.184 00:05:17.184 Elapsed time = 0.010 seconds 00:05:17.184 00:05:17.184 real 0m0.064s 00:05:17.184 user 0m0.019s 00:05:17.184 sys 0m0.044s 00:05:17.184 14:13:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.184 14:13:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 ************************************ 00:05:17.184 END TEST env_mem_callbacks 00:05:17.184 ************************************ 00:05:17.184 00:05:17.184 real 0m7.231s 00:05:17.184 user 0m0.950s 00:05:17.184 sys 0m0.825s 00:05:17.184 14:13:54 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.184 14:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 ************************************ 00:05:17.184 END TEST env 00:05:17.184 ************************************ 00:05:17.184 14:13:54 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.184 14:13:54 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.184 14:13:54 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.184 14:13:54 -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 ************************************ 00:05:17.184 START TEST rpc 00:05:17.184 ************************************ 00:05:17.184 14:13:54 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.444 * Looking for test storage... 00:05:17.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.445 14:13:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2811061 00:05:17.445 14:13:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.445 14:13:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:17.445 14:13:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2811061 00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@830 -- # '[' -z 2811061 ']' 00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
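Annotation: rpc.sh above launches `build/bin/spdk_tgt -e bdev` in the background, records its pid (2811061 here), and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock, retrying up to 100 times. A rough equivalent outside the harness is to poll the socket with rpc.py; rpc.py and rpc_get_methods are standard SPDK tooling, but the polling loop itself is illustrative rather than taken from this log:

  # Sketch: start the target and wait for its RPC socket, roughly what waitforlisten does
  ./build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"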
00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:17.445 14:13:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.445 [2024-06-10 14:13:54.880518] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:17.445 [2024-06-10 14:13:54.880580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811061 ] 00:05:17.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.445 [2024-06-10 14:13:54.962192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.705 [2024-06-10 14:13:55.058918] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.705 [2024-06-10 14:13:55.058977] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2811061' to capture a snapshot of events at runtime. 00:05:17.705 [2024-06-10 14:13:55.058985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.705 [2024-06-10 14:13:55.058992] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.705 [2024-06-10 14:13:55.058998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2811061 for offline analysis/debug. 00:05:17.705 [2024-06-10 14:13:55.059026] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.279 14:13:55 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.279 14:13:55 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:18.279 14:13:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.279 14:13:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.279 14:13:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.279 14:13:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.279 14:13:55 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.279 14:13:55 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.279 14:13:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.279 ************************************ 00:05:18.279 START TEST rpc_integrity 00:05:18.279 ************************************ 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.279 14:13:55 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.279 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.279 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.540 { 00:05:18.540 "name": "Malloc0", 00:05:18.540 "aliases": [ 00:05:18.540 "5c19b523-f210-4744-964e-ca2f493fc47d" 00:05:18.540 ], 00:05:18.540 "product_name": "Malloc disk", 00:05:18.540 "block_size": 512, 00:05:18.540 "num_blocks": 16384, 00:05:18.540 "uuid": "5c19b523-f210-4744-964e-ca2f493fc47d", 00:05:18.540 "assigned_rate_limits": { 00:05:18.540 "rw_ios_per_sec": 0, 00:05:18.540 "rw_mbytes_per_sec": 0, 00:05:18.540 "r_mbytes_per_sec": 0, 00:05:18.540 "w_mbytes_per_sec": 0 00:05:18.540 }, 00:05:18.540 "claimed": false, 00:05:18.540 "zoned": false, 00:05:18.540 "supported_io_types": { 00:05:18.540 "read": true, 00:05:18.540 "write": true, 00:05:18.540 "unmap": true, 00:05:18.540 "write_zeroes": true, 00:05:18.540 "flush": true, 00:05:18.540 "reset": true, 00:05:18.540 "compare": false, 00:05:18.540 "compare_and_write": false, 00:05:18.540 "abort": true, 00:05:18.540 "nvme_admin": false, 00:05:18.540 "nvme_io": false 00:05:18.540 }, 00:05:18.540 "memory_domains": [ 00:05:18.540 { 00:05:18.540 "dma_device_id": "system", 00:05:18.540 "dma_device_type": 1 00:05:18.540 }, 00:05:18.540 { 00:05:18.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.540 "dma_device_type": 2 00:05:18.540 } 00:05:18.540 ], 00:05:18.540 "driver_specific": {} 00:05:18.540 } 00:05:18.540 ]' 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.540 [2024-06-10 14:13:55.938888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.540 [2024-06-10 14:13:55.938934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.540 [2024-06-10 14:13:55.938949] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16e2be0 00:05:18.540 [2024-06-10 14:13:55.938957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.540 [2024-06-10 14:13:55.940529] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.540 [2024-06-10 14:13:55.940564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.540 Passthru0 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.540 14:13:55 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.540 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.540 { 00:05:18.540 "name": "Malloc0", 00:05:18.540 "aliases": [ 00:05:18.540 "5c19b523-f210-4744-964e-ca2f493fc47d" 00:05:18.541 ], 00:05:18.541 "product_name": "Malloc disk", 00:05:18.541 "block_size": 512, 00:05:18.541 "num_blocks": 16384, 00:05:18.541 "uuid": "5c19b523-f210-4744-964e-ca2f493fc47d", 00:05:18.541 "assigned_rate_limits": { 00:05:18.541 "rw_ios_per_sec": 0, 00:05:18.541 "rw_mbytes_per_sec": 0, 00:05:18.541 "r_mbytes_per_sec": 0, 00:05:18.541 "w_mbytes_per_sec": 0 00:05:18.541 }, 00:05:18.541 "claimed": true, 00:05:18.541 "claim_type": "exclusive_write", 00:05:18.541 "zoned": false, 00:05:18.541 "supported_io_types": { 00:05:18.541 "read": true, 00:05:18.541 "write": true, 00:05:18.541 "unmap": true, 00:05:18.541 "write_zeroes": true, 00:05:18.541 "flush": true, 00:05:18.541 "reset": true, 00:05:18.541 "compare": false, 00:05:18.541 "compare_and_write": false, 00:05:18.541 "abort": true, 00:05:18.541 "nvme_admin": false, 00:05:18.541 "nvme_io": false 00:05:18.541 }, 00:05:18.541 "memory_domains": [ 00:05:18.541 { 00:05:18.541 "dma_device_id": "system", 00:05:18.541 "dma_device_type": 1 00:05:18.541 }, 00:05:18.541 { 00:05:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.541 "dma_device_type": 2 00:05:18.541 } 00:05:18.541 ], 00:05:18.541 "driver_specific": {} 00:05:18.541 }, 00:05:18.541 { 00:05:18.541 "name": "Passthru0", 00:05:18.541 "aliases": [ 00:05:18.541 "896e0032-20df-5ba7-9325-0dfc7319dd5e" 00:05:18.541 ], 00:05:18.541 "product_name": "passthru", 00:05:18.541 "block_size": 512, 00:05:18.541 "num_blocks": 16384, 00:05:18.541 "uuid": "896e0032-20df-5ba7-9325-0dfc7319dd5e", 00:05:18.541 "assigned_rate_limits": { 00:05:18.541 "rw_ios_per_sec": 0, 00:05:18.541 "rw_mbytes_per_sec": 0, 00:05:18.541 "r_mbytes_per_sec": 0, 00:05:18.541 "w_mbytes_per_sec": 0 00:05:18.541 }, 00:05:18.541 "claimed": false, 00:05:18.541 "zoned": false, 00:05:18.541 "supported_io_types": { 00:05:18.541 "read": true, 00:05:18.541 "write": true, 00:05:18.541 "unmap": true, 00:05:18.541 "write_zeroes": true, 00:05:18.541 "flush": true, 00:05:18.541 "reset": true, 00:05:18.541 "compare": false, 00:05:18.541 "compare_and_write": false, 00:05:18.541 "abort": true, 00:05:18.541 "nvme_admin": false, 00:05:18.541 "nvme_io": false 00:05:18.541 }, 00:05:18.541 "memory_domains": [ 00:05:18.541 { 00:05:18.541 "dma_device_id": "system", 00:05:18.541 "dma_device_type": 1 00:05:18.541 }, 00:05:18.541 { 00:05:18.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.541 "dma_device_type": 2 00:05:18.541 } 00:05:18.541 ], 00:05:18.541 "driver_specific": { 00:05:18.541 "passthru": { 00:05:18.541 "name": "Passthru0", 00:05:18.541 "base_bdev_name": "Malloc0" 00:05:18.541 } 00:05:18.541 } 00:05:18.541 } 00:05:18.541 ]' 00:05:18.541 14:13:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.541 
14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.541 14:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.541 00:05:18.541 real 0m0.298s 00:05:18.541 user 0m0.188s 00:05:18.541 sys 0m0.037s 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.541 14:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.541 ************************************ 00:05:18.541 END TEST rpc_integrity 00:05:18.541 ************************************ 00:05:18.541 14:13:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:18.541 14:13:56 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.803 14:13:56 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.803 14:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 ************************************ 00:05:18.803 START TEST rpc_plugins 00:05:18.803 ************************************ 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:18.803 { 00:05:18.803 "name": "Malloc1", 00:05:18.803 "aliases": [ 00:05:18.803 "dc392449-6c72-4fa8-bd2b-060155ae828e" 00:05:18.803 ], 00:05:18.803 "product_name": "Malloc disk", 00:05:18.803 "block_size": 4096, 00:05:18.803 "num_blocks": 256, 00:05:18.803 "uuid": "dc392449-6c72-4fa8-bd2b-060155ae828e", 00:05:18.803 "assigned_rate_limits": { 00:05:18.803 "rw_ios_per_sec": 0, 00:05:18.803 "rw_mbytes_per_sec": 0, 00:05:18.803 "r_mbytes_per_sec": 0, 00:05:18.803 "w_mbytes_per_sec": 0 00:05:18.803 }, 00:05:18.803 "claimed": false, 00:05:18.803 "zoned": false, 00:05:18.803 "supported_io_types": { 00:05:18.803 "read": true, 00:05:18.803 "write": true, 00:05:18.803 "unmap": true, 00:05:18.803 "write_zeroes": true, 00:05:18.803 
"flush": true, 00:05:18.803 "reset": true, 00:05:18.803 "compare": false, 00:05:18.803 "compare_and_write": false, 00:05:18.803 "abort": true, 00:05:18.803 "nvme_admin": false, 00:05:18.803 "nvme_io": false 00:05:18.803 }, 00:05:18.803 "memory_domains": [ 00:05:18.803 { 00:05:18.803 "dma_device_id": "system", 00:05:18.803 "dma_device_type": 1 00:05:18.803 }, 00:05:18.803 { 00:05:18.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.803 "dma_device_type": 2 00:05:18.803 } 00:05:18.803 ], 00:05:18.803 "driver_specific": {} 00:05:18.803 } 00:05:18.803 ]' 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:18.803 14:13:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:18.803 00:05:18.803 real 0m0.143s 00:05:18.803 user 0m0.094s 00:05:18.803 sys 0m0.017s 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.803 14:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 ************************************ 00:05:18.803 END TEST rpc_plugins 00:05:18.803 ************************************ 00:05:18.803 14:13:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:18.803 14:13:56 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.803 14:13:56 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.803 14:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.803 ************************************ 00:05:18.803 START TEST rpc_trace_cmd_test 00:05:18.803 ************************************ 00:05:18.803 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:18.803 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:19.065 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2811061", 00:05:19.065 "tpoint_group_mask": "0x8", 00:05:19.065 "iscsi_conn": { 00:05:19.065 "mask": "0x2", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "scsi": { 00:05:19.065 "mask": "0x4", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "bdev": { 00:05:19.065 "mask": "0x8", 00:05:19.065 "tpoint_mask": 
"0xffffffffffffffff" 00:05:19.065 }, 00:05:19.065 "nvmf_rdma": { 00:05:19.065 "mask": "0x10", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "nvmf_tcp": { 00:05:19.065 "mask": "0x20", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "ftl": { 00:05:19.065 "mask": "0x40", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "blobfs": { 00:05:19.065 "mask": "0x80", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "dsa": { 00:05:19.065 "mask": "0x200", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "thread": { 00:05:19.065 "mask": "0x400", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "nvme_pcie": { 00:05:19.065 "mask": "0x800", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "iaa": { 00:05:19.065 "mask": "0x1000", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "nvme_tcp": { 00:05:19.065 "mask": "0x2000", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "bdev_nvme": { 00:05:19.065 "mask": "0x4000", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 }, 00:05:19.065 "sock": { 00:05:19.065 "mask": "0x8000", 00:05:19.065 "tpoint_mask": "0x0" 00:05:19.065 } 00:05:19.065 }' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:19.065 00:05:19.065 real 0m0.250s 00:05:19.065 user 0m0.212s 00:05:19.065 sys 0m0.029s 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.065 14:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.065 ************************************ 00:05:19.065 END TEST rpc_trace_cmd_test 00:05:19.065 ************************************ 00:05:19.327 14:13:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:19.327 14:13:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.327 14:13:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.327 14:13:56 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.327 14:13:56 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.327 14:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 ************************************ 00:05:19.327 START TEST rpc_daemon_integrity 00:05:19.327 ************************************ 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.327 { 00:05:19.327 "name": "Malloc2", 00:05:19.327 "aliases": [ 00:05:19.327 "6b4199c6-2308-478d-8240-a3b6b83f37b4" 00:05:19.327 ], 00:05:19.327 "product_name": "Malloc disk", 00:05:19.327 "block_size": 512, 00:05:19.327 "num_blocks": 16384, 00:05:19.327 "uuid": "6b4199c6-2308-478d-8240-a3b6b83f37b4", 00:05:19.327 "assigned_rate_limits": { 00:05:19.327 "rw_ios_per_sec": 0, 00:05:19.327 "rw_mbytes_per_sec": 0, 00:05:19.327 "r_mbytes_per_sec": 0, 00:05:19.327 "w_mbytes_per_sec": 0 00:05:19.327 }, 00:05:19.327 "claimed": false, 00:05:19.327 "zoned": false, 00:05:19.327 "supported_io_types": { 00:05:19.327 "read": true, 00:05:19.327 "write": true, 00:05:19.327 "unmap": true, 00:05:19.327 "write_zeroes": true, 00:05:19.327 "flush": true, 00:05:19.327 "reset": true, 00:05:19.327 "compare": false, 00:05:19.327 "compare_and_write": false, 00:05:19.327 "abort": true, 00:05:19.327 "nvme_admin": false, 00:05:19.327 "nvme_io": false 00:05:19.327 }, 00:05:19.327 "memory_domains": [ 00:05:19.327 { 00:05:19.327 "dma_device_id": "system", 00:05:19.327 "dma_device_type": 1 00:05:19.327 }, 00:05:19.327 { 00:05:19.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.327 "dma_device_type": 2 00:05:19.327 } 00:05:19.327 ], 00:05:19.327 "driver_specific": {} 00:05:19.327 } 00:05:19.327 ]' 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 [2024-06-10 14:13:56.873637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:19.327 [2024-06-10 14:13:56.873684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.327 [2024-06-10 14:13:56.873702] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16da4b0 00:05:19.327 [2024-06-10 14:13:56.873710] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.327 [2024-06-10 14:13:56.875111] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.327 [2024-06-10 14:13:56.875144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.327 Passthru0 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.327 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.327 { 00:05:19.327 "name": "Malloc2", 00:05:19.327 "aliases": [ 00:05:19.327 "6b4199c6-2308-478d-8240-a3b6b83f37b4" 00:05:19.327 ], 00:05:19.328 "product_name": "Malloc disk", 00:05:19.328 "block_size": 512, 00:05:19.328 "num_blocks": 16384, 00:05:19.328 "uuid": "6b4199c6-2308-478d-8240-a3b6b83f37b4", 00:05:19.328 "assigned_rate_limits": { 00:05:19.328 "rw_ios_per_sec": 0, 00:05:19.328 "rw_mbytes_per_sec": 0, 00:05:19.328 "r_mbytes_per_sec": 0, 00:05:19.328 "w_mbytes_per_sec": 0 00:05:19.328 }, 00:05:19.328 "claimed": true, 00:05:19.328 "claim_type": "exclusive_write", 00:05:19.328 "zoned": false, 00:05:19.328 "supported_io_types": { 00:05:19.328 "read": true, 00:05:19.328 "write": true, 00:05:19.328 "unmap": true, 00:05:19.328 "write_zeroes": true, 00:05:19.328 "flush": true, 00:05:19.328 "reset": true, 00:05:19.328 "compare": false, 00:05:19.328 "compare_and_write": false, 00:05:19.328 "abort": true, 00:05:19.328 "nvme_admin": false, 00:05:19.328 "nvme_io": false 00:05:19.328 }, 00:05:19.328 "memory_domains": [ 00:05:19.328 { 00:05:19.328 "dma_device_id": "system", 00:05:19.328 "dma_device_type": 1 00:05:19.328 }, 00:05:19.328 { 00:05:19.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.328 "dma_device_type": 2 00:05:19.328 } 00:05:19.328 ], 00:05:19.328 "driver_specific": {} 00:05:19.328 }, 00:05:19.328 { 00:05:19.328 "name": "Passthru0", 00:05:19.328 "aliases": [ 00:05:19.328 "bc4519a3-a0c7-5858-9bc2-6105da927955" 00:05:19.328 ], 00:05:19.328 "product_name": "passthru", 00:05:19.328 "block_size": 512, 00:05:19.328 "num_blocks": 16384, 00:05:19.328 "uuid": "bc4519a3-a0c7-5858-9bc2-6105da927955", 00:05:19.328 "assigned_rate_limits": { 00:05:19.328 "rw_ios_per_sec": 0, 00:05:19.328 "rw_mbytes_per_sec": 0, 00:05:19.328 "r_mbytes_per_sec": 0, 00:05:19.328 "w_mbytes_per_sec": 0 00:05:19.328 }, 00:05:19.328 "claimed": false, 00:05:19.328 "zoned": false, 00:05:19.328 "supported_io_types": { 00:05:19.328 "read": true, 00:05:19.328 "write": true, 00:05:19.328 "unmap": true, 00:05:19.328 "write_zeroes": true, 00:05:19.328 "flush": true, 00:05:19.328 "reset": true, 00:05:19.328 "compare": false, 00:05:19.328 "compare_and_write": false, 00:05:19.328 "abort": true, 00:05:19.328 "nvme_admin": false, 00:05:19.328 "nvme_io": false 00:05:19.328 }, 00:05:19.328 "memory_domains": [ 00:05:19.328 { 00:05:19.328 "dma_device_id": "system", 00:05:19.328 "dma_device_type": 1 00:05:19.328 }, 00:05:19.328 { 00:05:19.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.328 "dma_device_type": 2 00:05:19.328 } 00:05:19.328 ], 00:05:19.328 "driver_specific": { 00:05:19.328 "passthru": { 00:05:19.328 "name": "Passthru0", 00:05:19.328 "base_bdev_name": "Malloc2" 00:05:19.328 } 00:05:19.328 } 00:05:19.328 } 00:05:19.328 ]' 00:05:19.328 14:13:56 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.590 14:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.590 14:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.590 00:05:19.590 real 0m0.295s 00:05:19.590 user 0m0.193s 00:05:19.590 sys 0m0.031s 00:05:19.590 14:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.590 14:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.590 ************************************ 00:05:19.590 END TEST rpc_daemon_integrity 00:05:19.590 ************************************ 00:05:19.590 14:13:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:19.590 14:13:57 rpc -- rpc/rpc.sh@84 -- # killprocess 2811061 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@949 -- # '[' -z 2811061 ']' 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@953 -- # kill -0 2811061 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@954 -- # uname 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2811061 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2811061' 00:05:19.590 killing process with pid 2811061 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@968 -- # kill 2811061 00:05:19.590 14:13:57 rpc -- common/autotest_common.sh@973 -- # wait 2811061 00:05:19.851 00:05:19.851 real 0m2.637s 00:05:19.851 user 0m3.444s 00:05:19.851 sys 0m0.782s 00:05:19.851 14:13:57 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:19.851 14:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.851 ************************************ 00:05:19.851 END TEST rpc 00:05:19.851 ************************************ 00:05:19.851 14:13:57 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:19.851 14:13:57 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:19.851 14:13:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:19.851 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.851 ************************************ 00:05:19.851 START TEST skip_rpc 00:05:19.851 ************************************ 00:05:19.851 14:13:57 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:20.113 * Looking for test storage... 00:05:20.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:20.113 14:13:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.113 14:13:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:20.113 14:13:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.113 14:13:57 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.113 14:13:57 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.113 14:13:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.113 ************************************ 00:05:20.113 START TEST skip_rpc 00:05:20.113 ************************************ 00:05:20.114 14:13:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:20.114 14:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2811900 00:05:20.114 14:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.114 14:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.114 14:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.114 [2024-06-10 14:13:57.620450] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:05:20.114 [2024-06-10 14:13:57.620510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811900 ] 00:05:20.114 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.114 [2024-06-10 14:13:57.698766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.375 [2024-06-10 14:13:57.793252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2811900 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 2811900 ']' 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 2811900 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2811900 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2811900' 00:05:25.666 killing process with pid 2811900 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 2811900 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 2811900 00:05:25.666 00:05:25.666 real 0m5.274s 00:05:25.666 user 0m5.021s 00:05:25.666 sys 0m0.286s 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:25.666 14:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.666 ************************************ 00:05:25.666 END TEST skip_rpc 
00:05:25.666 ************************************ 00:05:25.666 14:14:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.666 14:14:02 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:25.666 14:14:02 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:25.666 14:14:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.666 ************************************ 00:05:25.666 START TEST skip_rpc_with_json 00:05:25.666 ************************************ 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2812950 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2812950 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 2812950 ']' 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:25.666 14:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.666 [2024-06-10 14:14:02.958373] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
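The skip_rpc case that just finished is the simplest of these checks: started with --no-rpc-server, the target never opens /var/tmp/spdk.sock, so the probing RPC call is expected to fail. A sketch of that pattern outside the harness, under the same $SPDK_DIR/$RPC assumptions as the earlier sketch:

    SPDK_DIR=${SPDK_DIR:-./spdk}; RPC="$SPDK_DIR/scripts/rpc.py"   # same assumptions as above
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                            # the test waits the same way before probing
    if "$RPC" spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: an RPC server answered" >&2  # the harness treats success here as a failure
    fi
    kill "$tgt_pid"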
00:05:25.666 [2024-06-10 14:14:02.958426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812950 ] 00:05:25.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.666 [2024-06-10 14:14:03.033816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.666 [2024-06-10 14:14:03.105395] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.237 [2024-06-10 14:14:03.820221] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.237 request: 00:05:26.237 { 00:05:26.237 "trtype": "tcp", 00:05:26.237 "method": "nvmf_get_transports", 00:05:26.237 "req_id": 1 00:05:26.237 } 00:05:26.237 Got JSON-RPC error response 00:05:26.237 response: 00:05:26.237 { 00:05:26.237 "code": -19, 00:05:26.237 "message": "No such device" 00:05:26.237 } 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.237 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.237 [2024-06-10 14:14:03.828330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.497 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.497 { 00:05:26.497 "subsystems": [ 00:05:26.497 { 00:05:26.497 "subsystem": "vfio_user_target", 00:05:26.497 "config": null 00:05:26.497 }, 00:05:26.497 { 00:05:26.497 "subsystem": "keyring", 00:05:26.497 "config": [] 00:05:26.497 }, 00:05:26.497 { 00:05:26.497 "subsystem": "iobuf", 00:05:26.497 "config": [ 00:05:26.497 { 00:05:26.497 "method": "iobuf_set_options", 00:05:26.497 "params": { 00:05:26.497 "small_pool_count": 8192, 00:05:26.497 "large_pool_count": 1024, 00:05:26.497 "small_bufsize": 8192, 00:05:26.497 "large_bufsize": 135168 00:05:26.497 } 00:05:26.497 } 00:05:26.497 ] 00:05:26.497 }, 00:05:26.497 { 00:05:26.497 "subsystem": "sock", 00:05:26.497 "config": [ 00:05:26.497 { 00:05:26.497 "method": "sock_set_default_impl", 00:05:26.497 "params": { 00:05:26.497 "impl_name": "posix" 00:05:26.497 } 00:05:26.497 }, 00:05:26.497 { 00:05:26.497 "method": 
"sock_impl_set_options", 00:05:26.497 "params": { 00:05:26.497 "impl_name": "ssl", 00:05:26.497 "recv_buf_size": 4096, 00:05:26.497 "send_buf_size": 4096, 00:05:26.497 "enable_recv_pipe": true, 00:05:26.497 "enable_quickack": false, 00:05:26.497 "enable_placement_id": 0, 00:05:26.497 "enable_zerocopy_send_server": true, 00:05:26.497 "enable_zerocopy_send_client": false, 00:05:26.497 "zerocopy_threshold": 0, 00:05:26.497 "tls_version": 0, 00:05:26.497 "enable_ktls": false 00:05:26.497 } 00:05:26.497 }, 00:05:26.497 { 00:05:26.497 "method": "sock_impl_set_options", 00:05:26.497 "params": { 00:05:26.497 "impl_name": "posix", 00:05:26.497 "recv_buf_size": 2097152, 00:05:26.497 "send_buf_size": 2097152, 00:05:26.497 "enable_recv_pipe": true, 00:05:26.497 "enable_quickack": false, 00:05:26.497 "enable_placement_id": 0, 00:05:26.497 "enable_zerocopy_send_server": true, 00:05:26.497 "enable_zerocopy_send_client": false, 00:05:26.497 "zerocopy_threshold": 0, 00:05:26.497 "tls_version": 0, 00:05:26.497 "enable_ktls": false 00:05:26.497 } 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "vmd", 00:05:26.498 "config": [] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "accel", 00:05:26.498 "config": [ 00:05:26.498 { 00:05:26.498 "method": "accel_set_options", 00:05:26.498 "params": { 00:05:26.498 "small_cache_size": 128, 00:05:26.498 "large_cache_size": 16, 00:05:26.498 "task_count": 2048, 00:05:26.498 "sequence_count": 2048, 00:05:26.498 "buf_count": 2048 00:05:26.498 } 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "bdev", 00:05:26.498 "config": [ 00:05:26.498 { 00:05:26.498 "method": "bdev_set_options", 00:05:26.498 "params": { 00:05:26.498 "bdev_io_pool_size": 65535, 00:05:26.498 "bdev_io_cache_size": 256, 00:05:26.498 "bdev_auto_examine": true, 00:05:26.498 "iobuf_small_cache_size": 128, 00:05:26.498 "iobuf_large_cache_size": 16 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "bdev_raid_set_options", 00:05:26.498 "params": { 00:05:26.498 "process_window_size_kb": 1024 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "bdev_iscsi_set_options", 00:05:26.498 "params": { 00:05:26.498 "timeout_sec": 30 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "bdev_nvme_set_options", 00:05:26.498 "params": { 00:05:26.498 "action_on_timeout": "none", 00:05:26.498 "timeout_us": 0, 00:05:26.498 "timeout_admin_us": 0, 00:05:26.498 "keep_alive_timeout_ms": 10000, 00:05:26.498 "arbitration_burst": 0, 00:05:26.498 "low_priority_weight": 0, 00:05:26.498 "medium_priority_weight": 0, 00:05:26.498 "high_priority_weight": 0, 00:05:26.498 "nvme_adminq_poll_period_us": 10000, 00:05:26.498 "nvme_ioq_poll_period_us": 0, 00:05:26.498 "io_queue_requests": 0, 00:05:26.498 "delay_cmd_submit": true, 00:05:26.498 "transport_retry_count": 4, 00:05:26.498 "bdev_retry_count": 3, 00:05:26.498 "transport_ack_timeout": 0, 00:05:26.498 "ctrlr_loss_timeout_sec": 0, 00:05:26.498 "reconnect_delay_sec": 0, 00:05:26.498 "fast_io_fail_timeout_sec": 0, 00:05:26.498 "disable_auto_failback": false, 00:05:26.498 "generate_uuids": false, 00:05:26.498 "transport_tos": 0, 00:05:26.498 "nvme_error_stat": false, 00:05:26.498 "rdma_srq_size": 0, 00:05:26.498 "io_path_stat": false, 00:05:26.498 "allow_accel_sequence": false, 00:05:26.498 "rdma_max_cq_size": 0, 00:05:26.498 "rdma_cm_event_timeout_ms": 0, 00:05:26.498 "dhchap_digests": [ 00:05:26.498 "sha256", 00:05:26.498 "sha384", 00:05:26.498 "sha512" 
00:05:26.498 ], 00:05:26.498 "dhchap_dhgroups": [ 00:05:26.498 "null", 00:05:26.498 "ffdhe2048", 00:05:26.498 "ffdhe3072", 00:05:26.498 "ffdhe4096", 00:05:26.498 "ffdhe6144", 00:05:26.498 "ffdhe8192" 00:05:26.498 ] 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "bdev_nvme_set_hotplug", 00:05:26.498 "params": { 00:05:26.498 "period_us": 100000, 00:05:26.498 "enable": false 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "bdev_wait_for_examine" 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "scsi", 00:05:26.498 "config": null 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "scheduler", 00:05:26.498 "config": [ 00:05:26.498 { 00:05:26.498 "method": "framework_set_scheduler", 00:05:26.498 "params": { 00:05:26.498 "name": "static" 00:05:26.498 } 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "vhost_scsi", 00:05:26.498 "config": [] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "vhost_blk", 00:05:26.498 "config": [] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "ublk", 00:05:26.498 "config": [] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "nbd", 00:05:26.498 "config": [] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "nvmf", 00:05:26.498 "config": [ 00:05:26.498 { 00:05:26.498 "method": "nvmf_set_config", 00:05:26.498 "params": { 00:05:26.498 "discovery_filter": "match_any", 00:05:26.498 "admin_cmd_passthru": { 00:05:26.498 "identify_ctrlr": false 00:05:26.498 } 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "nvmf_set_max_subsystems", 00:05:26.498 "params": { 00:05:26.498 "max_subsystems": 1024 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "nvmf_set_crdt", 00:05:26.498 "params": { 00:05:26.498 "crdt1": 0, 00:05:26.498 "crdt2": 0, 00:05:26.498 "crdt3": 0 00:05:26.498 } 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "method": "nvmf_create_transport", 00:05:26.498 "params": { 00:05:26.498 "trtype": "TCP", 00:05:26.498 "max_queue_depth": 128, 00:05:26.498 "max_io_qpairs_per_ctrlr": 127, 00:05:26.498 "in_capsule_data_size": 4096, 00:05:26.498 "max_io_size": 131072, 00:05:26.498 "io_unit_size": 131072, 00:05:26.498 "max_aq_depth": 128, 00:05:26.498 "num_shared_buffers": 511, 00:05:26.498 "buf_cache_size": 4294967295, 00:05:26.498 "dif_insert_or_strip": false, 00:05:26.498 "zcopy": false, 00:05:26.498 "c2h_success": true, 00:05:26.498 "sock_priority": 0, 00:05:26.498 "abort_timeout_sec": 1, 00:05:26.498 "ack_timeout": 0, 00:05:26.498 "data_wr_pool_size": 0 00:05:26.498 } 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 }, 00:05:26.498 { 00:05:26.498 "subsystem": "iscsi", 00:05:26.498 "config": [ 00:05:26.498 { 00:05:26.498 "method": "iscsi_set_options", 00:05:26.498 "params": { 00:05:26.498 "node_base": "iqn.2016-06.io.spdk", 00:05:26.498 "max_sessions": 128, 00:05:26.498 "max_connections_per_session": 2, 00:05:26.498 "max_queue_depth": 64, 00:05:26.498 "default_time2wait": 2, 00:05:26.498 "default_time2retain": 20, 00:05:26.498 "first_burst_length": 8192, 00:05:26.498 "immediate_data": true, 00:05:26.498 "allow_duplicated_isid": false, 00:05:26.498 "error_recovery_level": 0, 00:05:26.498 "nop_timeout": 60, 00:05:26.498 "nop_in_interval": 30, 00:05:26.498 "disable_chap": false, 00:05:26.498 "require_chap": false, 00:05:26.498 "mutual_chap": false, 00:05:26.498 "chap_group": 0, 00:05:26.498 "max_large_datain_per_connection": 64, 00:05:26.498 "max_r2t_per_connection": 4, 00:05:26.498 
"pdu_pool_size": 36864, 00:05:26.498 "immediate_data_pool_size": 16384, 00:05:26.498 "data_out_pool_size": 2048 00:05:26.498 } 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 } 00:05:26.498 ] 00:05:26.498 } 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2812950 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2812950 ']' 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2812950 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:26.498 14:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2812950 00:05:26.498 14:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:26.498 14:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:26.498 14:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2812950' 00:05:26.498 killing process with pid 2812950 00:05:26.499 14:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2812950 00:05:26.499 14:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2812950 00:05:26.759 14:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2813290 00:05:26.759 14:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.759 14:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2813290 ']' 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2813290' 00:05:32.044 killing process with pid 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2813290 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.044 00:05:32.044 real 
0m6.599s 00:05:32.044 user 0m6.543s 00:05:32.044 sys 0m0.533s 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.044 ************************************ 00:05:32.044 END TEST skip_rpc_with_json 00:05:32.044 ************************************ 00:05:32.044 14:14:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:32.044 14:14:09 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.044 14:14:09 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.044 14:14:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.044 ************************************ 00:05:32.044 START TEST skip_rpc_with_delay 00:05:32.044 ************************************ 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.044 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.044 [2024-06-10 14:14:09.631454] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
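skip_rpc_with_json, whose output ends just above, round-trips the runtime configuration: it creates the TCP transport over RPC, saves the JSON dump shown above with save_config, restarts the target with --json and no RPC server, and greps the log for the transport-init notice. A condensed sketch of that sequence, with the same shorthand and assumptions as the earlier sketches:

    SPDK_DIR=${SPDK_DIR:-./spdk}; RPC="$SPDK_DIR/scripts/rpc.py"   # same assumptions as above
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
    tgt_pid=$!
    until "$RPC" spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    "$RPC" nvmf_create_transport -t tcp     # nvmf_get_transports reports 'does not exist' before this
    "$RPC" save_config > config.json        # produces the subsystem dump printed above
    kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null

    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "transport restored from config.json"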
00:05:32.044 [2024-06-10 14:14:09.631534] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:32.304 00:05:32.304 real 0m0.077s 00:05:32.304 user 0m0.052s 00:05:32.304 sys 0m0.024s 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.304 14:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.304 ************************************ 00:05:32.304 END TEST skip_rpc_with_delay 00:05:32.304 ************************************ 00:05:32.304 14:14:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.304 14:14:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.304 14:14:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.304 14:14:09 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.304 14:14:09 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.304 14:14:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.304 ************************************ 00:05:32.304 START TEST exit_on_failed_rpc_init 00:05:32.304 ************************************ 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2814353 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2814353 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 2814353 ']' 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.304 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.305 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.305 14:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.305 14:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.305 [2024-06-10 14:14:09.776231] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:05:32.305 [2024-06-10 14:14:09.776287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814353 ] 00:05:32.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.305 [2024-06-10 14:14:09.856252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.565 [2024-06-10 14:14:09.927575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.136 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.136 [2024-06-10 14:14:10.689579] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:33.136 [2024-06-10 14:14:10.689629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814640 ] 00:05:33.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.397 [2024-06-10 14:14:10.746521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.397 [2024-06-10 14:14:10.811052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.397 [2024-06-10 14:14:10.811110] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
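Note: the error just above is the one this test exists to provoke. The first spdk_tgt (pid 2814353) owns the default RPC socket, so a second instance started against the same /var/tmp/spdk.sock must refuse to initialize and exit non-zero. A minimal hand-run sketch of the same condition (binary path as used in this run, timing simplified):

    # first target claims the default RPC socket /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1 &
    sleep 1   # the test proper uses its waitforlisten helper instead of a fixed sleep
    # a second target on another core mask but the same socket is expected to fail with
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    ./build/bin/spdk_tgt -m 0x2
    echo "second instance exit status: $?"   # non-zero is the pass condition here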
00:05:33.397 [2024-06-10 14:14:10.811119] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.397 [2024-06-10 14:14:10.811126] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2814353 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 2814353 ']' 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 2814353 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:33.397 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2814353 00:05:33.398 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:33.398 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:33.398 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2814353' 00:05:33.398 killing process with pid 2814353 00:05:33.398 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 2814353 00:05:33.398 14:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 2814353 00:05:33.670 00:05:33.670 real 0m1.410s 00:05:33.670 user 0m1.693s 00:05:33.670 sys 0m0.384s 00:05:33.670 14:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:33.670 14:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.670 ************************************ 00:05:33.670 END TEST exit_on_failed_rpc_init 00:05:33.670 ************************************ 00:05:33.670 14:14:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.670 00:05:33.670 real 0m13.727s 00:05:33.670 user 0m13.445s 00:05:33.670 sys 0m1.478s 00:05:33.670 14:14:11 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:33.670 14:14:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.670 ************************************ 00:05:33.670 END TEST skip_rpc 00:05:33.670 ************************************ 00:05:33.670 14:14:11 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.670 14:14:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.670 14:14:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.670 14:14:11 -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.670 ************************************ 00:05:33.670 START TEST rpc_client 00:05:33.670 ************************************ 00:05:33.670 14:14:11 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.980 * Looking for test storage... 00:05:33.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:33.980 14:14:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.980 OK 00:05:33.980 14:14:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.980 00:05:33.980 real 0m0.128s 00:05:33.980 user 0m0.057s 00:05:33.980 sys 0m0.079s 00:05:33.980 14:14:11 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:33.980 14:14:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.980 ************************************ 00:05:33.980 END TEST rpc_client 00:05:33.980 ************************************ 00:05:33.980 14:14:11 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.980 14:14:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.980 14:14:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.980 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:05:33.980 ************************************ 00:05:33.980 START TEST json_config 00:05:33.980 ************************************ 00:05:33.980 14:14:11 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.980 14:14:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.980 14:14:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.980 14:14:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.980 14:14:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.980 14:14:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.980 14:14:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.981 14:14:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.981 14:14:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.981 14:14:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.981 14:14:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.981 14:14:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.981 14:14:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.981 14:14:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.981 14:14:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@47 -- # : 0 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.981 14:14:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:33.981 INFO: JSON configuration test init 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.981 14:14:11 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.981 14:14:11 json_config -- json_config/common.sh@9 -- # local app=target 00:05:33.981 14:14:11 json_config -- json_config/common.sh@10 -- # shift 00:05:33.981 14:14:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.981 14:14:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.981 14:14:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.981 14:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.981 14:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.981 14:14:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2814814 00:05:33.981 14:14:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.981 Waiting for target to run... 
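Note: the target above is started with --wait-for-rpc on its own socket, and everything that follows is driven through scripts/rpc.py against that socket. A rough sketch of the pattern, with the socket path taken from this run; rpc_get_methods is used here only as a cheap liveness probe (an assumption, the test's waitforlisten helper does the real waiting):

    SOCK=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    # poll until the RPC socket answers before issuing configuration calls
    until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py -s "$SOCK" load_config < some_config.json   # placeholder input file
    ./scripts/rpc.py -s "$SOCK" save_config > spdk_tgt_config.json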
00:05:33.981 14:14:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2814814 /var/tmp/spdk_tgt.sock 00:05:33.981 14:14:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@830 -- # '[' -z 2814814 ']' 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:33.981 14:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.243 [2024-06-10 14:14:11.609794] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:34.243 [2024-06-10 14:14:11.609844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814814 ] 00:05:34.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.503 [2024-06-10 14:14:11.897448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.503 [2024-06-10 14:14:11.956814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:35.073 14:14:12 json_config -- json_config/common.sh@26 -- # echo '' 00:05:35.073 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:35.073 14:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:35.073 14:14:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:35.073 14:14:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:35.645 14:14:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:35.645 14:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.645 14:14:13 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:35.645 14:14:13 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:35.645 14:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:35.907 14:14:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:35.907 14:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:35.907 14:14:13 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:35.907 14:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:35.907 14:14:13 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.907 14:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.169 MallocForNvmf0 00:05:36.169 14:14:13 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.169 14:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.169 MallocForNvmf1 00:05:36.169 14:14:13 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.169 14:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.430 [2024-06-10 14:14:13.885935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.430 14:14:13 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.430 14:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.690 14:14:14 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.690 14:14:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.951 14:14:14 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.951 14:14:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.951 14:14:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.951 14:14:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:37.211 [2024-06-10 14:14:14.676378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.211 14:14:14 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:37.211 14:14:14 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:37.211 14:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.211 14:14:14 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:37.211 14:14:14 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:37.211 14:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.211 14:14:14 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:37.211 14:14:14 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.212 14:14:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.472 MallocBdevForConfigChangeCheck 00:05:37.472 14:14:14 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:37.472 14:14:14 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:37.472 14:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.472 14:14:15 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:37.472 14:14:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.043 14:14:15 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:38.043 INFO: shutting down applications... 
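Note: pulled together from the tgt_rpc calls above, the NVMe-oF/TCP configuration this test builds is the following RPC sequence (bdev names, sizes and the 127.0.0.1:4420 listener copied from the log; shown as a sketch, not the test script itself):

    RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420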
00:05:38.043 14:14:15 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:38.043 14:14:15 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:38.043 14:14:15 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:38.043 14:14:15 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:38.303 Calling clear_iscsi_subsystem 00:05:38.303 Calling clear_nvmf_subsystem 00:05:38.303 Calling clear_nbd_subsystem 00:05:38.303 Calling clear_ublk_subsystem 00:05:38.303 Calling clear_vhost_blk_subsystem 00:05:38.303 Calling clear_vhost_scsi_subsystem 00:05:38.303 Calling clear_bdev_subsystem 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:38.303 14:14:15 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.564 14:14:16 json_config -- json_config/json_config.sh@345 -- # break 00:05:38.564 14:14:16 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:38.564 14:14:16 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:38.564 14:14:16 json_config -- json_config/common.sh@31 -- # local app=target 00:05:38.564 14:14:16 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.564 14:14:16 json_config -- json_config/common.sh@35 -- # [[ -n 2814814 ]] 00:05:38.564 14:14:16 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2814814 00:05:38.564 14:14:16 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.564 14:14:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.564 14:14:16 json_config -- json_config/common.sh@41 -- # kill -0 2814814 00:05:38.564 14:14:16 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.137 14:14:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.137 14:14:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.137 14:14:16 json_config -- json_config/common.sh@41 -- # kill -0 2814814 00:05:39.137 14:14:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.137 14:14:16 json_config -- json_config/common.sh@43 -- # break 00:05:39.137 14:14:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.137 14:14:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.137 SPDK target shutdown done 00:05:39.137 14:14:16 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.137 INFO: relaunching applications... 
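Note: the shutdown just logged uses the helper loop from json_config/common.sh that is visible in the trace: send SIGINT, then poll the pid for at most 30 half-second intervals before giving up. Condensed into one place (pid variable assumed):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only checks that the process still exists, it sends no signal
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done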
00:05:39.137 14:14:16 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.137 14:14:16 json_config -- json_config/common.sh@9 -- # local app=target 00:05:39.137 14:14:16 json_config -- json_config/common.sh@10 -- # shift 00:05:39.137 14:14:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.137 14:14:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.137 14:14:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.137 14:14:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.137 14:14:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.137 14:14:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2815949 00:05:39.137 14:14:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.137 Waiting for target to run... 00:05:39.137 14:14:16 json_config -- json_config/common.sh@25 -- # waitforlisten 2815949 /var/tmp/spdk_tgt.sock 00:05:39.137 14:14:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@830 -- # '[' -z 2815949 ']' 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:39.137 14:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.137 [2024-06-10 14:14:16.680524] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:39.137 [2024-06-10 14:14:16.680580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815949 ] 00:05:39.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.709 [2024-06-10 14:14:17.050178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.709 [2024-06-10 14:14:17.115938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.280 [2024-06-10 14:14:17.608120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.280 [2024-06-10 14:14:17.640469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:40.280 14:14:17 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:40.280 14:14:17 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:40.280 14:14:17 json_config -- json_config/common.sh@26 -- # echo '' 00:05:40.280 00:05:40.280 14:14:17 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:40.280 14:14:17 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:40.280 INFO: Checking if target configuration is the same... 
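Note: the check announced here is performed by json_diff.sh, as the trace below shows: a fresh save_config dump and the JSON the target was relaunched from are both normalized with config_filter.py -method sort and then compared with diff -u. Reduced to its essentials (temporary file names vary per run):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/current.json
    ./test/json_config/config_filter.py -method sort < /tmp/current.json    > /tmp/current.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/expected.sorted
    diff -u /tmp/current.sorted /tmp/expected.sorted \
        && echo 'INFO: JSON config files are the same'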
00:05:40.280 14:14:17 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.280 14:14:17 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:40.280 14:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.280 + '[' 2 -ne 2 ']' 00:05:40.280 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.280 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:40.280 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.280 +++ basename /dev/fd/62 00:05:40.280 ++ mktemp /tmp/62.XXX 00:05:40.280 + tmp_file_1=/tmp/62.GXh 00:05:40.280 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.280 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.280 + tmp_file_2=/tmp/spdk_tgt_config.json.GJ2 00:05:40.280 + ret=0 00:05:40.280 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.541 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.541 + diff -u /tmp/62.GXh /tmp/spdk_tgt_config.json.GJ2 00:05:40.541 + echo 'INFO: JSON config files are the same' 00:05:40.541 INFO: JSON config files are the same 00:05:40.541 + rm /tmp/62.GXh /tmp/spdk_tgt_config.json.GJ2 00:05:40.541 + exit 0 00:05:40.541 14:14:18 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:40.541 14:14:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:40.541 INFO: changing configuration and checking if this can be detected... 00:05:40.541 14:14:18 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.541 14:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:40.802 14:14:18 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.802 14:14:18 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:40.802 14:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.802 + '[' 2 -ne 2 ']' 00:05:40.802 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:40.802 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:40.802 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:40.802 +++ basename /dev/fd/62 00:05:40.802 ++ mktemp /tmp/62.XXX 00:05:40.802 + tmp_file_1=/tmp/62.VXm 00:05:40.802 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.802 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:40.802 + tmp_file_2=/tmp/spdk_tgt_config.json.1zf 00:05:40.802 + ret=0 00:05:40.802 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.062 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.322 + diff -u /tmp/62.VXm /tmp/spdk_tgt_config.json.1zf 00:05:41.322 + ret=1 00:05:41.322 + echo '=== Start of file: /tmp/62.VXm ===' 00:05:41.322 + cat /tmp/62.VXm 00:05:41.322 + echo '=== End of file: /tmp/62.VXm ===' 00:05:41.322 + echo '' 00:05:41.322 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1zf ===' 00:05:41.322 + cat /tmp/spdk_tgt_config.json.1zf 00:05:41.322 + echo '=== End of file: /tmp/spdk_tgt_config.json.1zf ===' 00:05:41.322 + echo '' 00:05:41.322 + rm /tmp/62.VXm /tmp/spdk_tgt_config.json.1zf 00:05:41.322 + exit 1 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:41.322 INFO: configuration change detected. 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@317 -- # [[ -n 2815949 ]] 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.322 14:14:18 json_config -- json_config/json_config.sh@323 -- # killprocess 2815949 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@949 -- # '[' -z 2815949 ']' 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@953 -- # kill -0 2815949 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@954 -- # uname 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:41.322 14:14:18 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2815949 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:41.322 14:14:18 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2815949' 00:05:41.322 killing process with pid 2815949 00:05:41.323 14:14:18 json_config -- common/autotest_common.sh@968 -- # kill 2815949 00:05:41.323 14:14:18 json_config -- common/autotest_common.sh@973 -- # wait 2815949 00:05:41.582 14:14:19 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.582 14:14:19 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:41.582 14:14:19 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:41.582 14:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.582 14:14:19 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:41.582 14:14:19 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:41.582 INFO: Success 00:05:41.582 00:05:41.582 real 0m7.687s 00:05:41.582 user 0m9.742s 00:05:41.582 sys 0m1.837s 00:05:41.582 14:14:19 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:41.582 14:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.582 ************************************ 00:05:41.582 END TEST json_config 00:05:41.582 ************************************ 00:05:41.582 14:14:19 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.582 14:14:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:41.582 14:14:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:41.582 14:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 ************************************ 00:05:41.843 START TEST json_config_extra_key 00:05:41.843 ************************************ 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.843 14:14:19 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.843 14:14:19 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.843 14:14:19 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.843 14:14:19 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.843 14:14:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.843 14:14:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.843 14:14:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.843 14:14:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.843 14:14:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.843 14:14:19 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.843 14:14:19 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.843 INFO: launching applications... 00:05:41.843 14:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2816727 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.843 Waiting for target to run... 
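Note: unlike the json_config run above, this test issues no configuration RPCs at all. The target is booted directly from a prepared JSON file via --json, and the test only verifies that it comes up and can be shut down again. The launch line, lifted from the trace below into a standalone sketch:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    # the test then waits for the RPC socket, prints Success, and SIGINTs the target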
00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2816727 /var/tmp/spdk_tgt.sock 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 2816727 ']' 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.843 14:14:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.843 14:14:19 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:41.844 14:14:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.844 [2024-06-10 14:14:19.344696] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:41.844 [2024-06-10 14:14:19.344756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816727 ] 00:05:41.844 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.104 [2024-06-10 14:14:19.632254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.104 [2024-06-10 14:14:19.691025] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.674 14:14:20 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:42.674 14:14:20 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.674 00:05:42.674 14:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.674 INFO: shutting down applications... 
00:05:42.674 14:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2816727 ]] 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2816727 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2816727 00:05:42.674 14:14:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2816727 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.245 14:14:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.245 SPDK target shutdown done 00:05:43.245 14:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.245 Success 00:05:43.245 00:05:43.245 real 0m1.475s 00:05:43.245 user 0m1.156s 00:05:43.245 sys 0m0.383s 00:05:43.245 14:14:20 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.245 14:14:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 ************************************ 00:05:43.245 END TEST json_config_extra_key 00:05:43.245 ************************************ 00:05:43.245 14:14:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.245 14:14:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:43.245 14:14:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.245 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 ************************************ 00:05:43.245 START TEST alias_rpc 00:05:43.245 ************************************ 00:05:43.245 14:14:20 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.246 * Looking for test storage... 
00:05:43.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.246 14:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.246 14:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2817104 00:05:43.246 14:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2817104 00:05:43.246 14:14:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 2817104 ']' 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:43.246 14:14:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.507 [2024-06-10 14:14:20.861284] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:43.507 [2024-06-10 14:14:20.861364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817104 ] 00:05:43.507 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.507 [2024-06-10 14:14:20.942141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.507 [2024-06-10 14:14:21.013944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.077 14:14:21 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:44.077 14:14:21 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:44.077 14:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:44.338 14:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2817104 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 2817104 ']' 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 2817104 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2817104 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2817104' 00:05:44.338 killing process with pid 2817104 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@968 -- # kill 2817104 00:05:44.338 14:14:21 alias_rpc -- common/autotest_common.sh@973 -- # wait 2817104 00:05:44.598 00:05:44.598 real 0m1.395s 00:05:44.598 user 0m1.553s 00:05:44.598 sys 0m0.392s 00:05:44.598 14:14:22 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.598 14:14:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.598 
************************************ 00:05:44.598 END TEST alias_rpc 00:05:44.598 ************************************ 00:05:44.598 14:14:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:44.598 14:14:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.598 14:14:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.598 14:14:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.598 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:44.859 ************************************ 00:05:44.859 START TEST spdkcli_tcp 00:05:44.859 ************************************ 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.859 * Looking for test storage... 00:05:44.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2817428 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2817428 00:05:44.859 14:14:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 2817428 ']' 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:44.859 14:14:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.859 [2024-06-10 14:14:22.373534] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:05:44.859 [2024-06-10 14:14:22.373605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817428 ] 00:05:44.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.119 [2024-06-10 14:14:22.455107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.119 [2024-06-10 14:14:22.533717] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.119 [2024-06-10 14:14:22.533723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.690 14:14:23 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:45.690 14:14:23 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:45.690 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2817518 00:05:45.690 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.690 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.951 [ 00:05:45.951 "bdev_malloc_delete", 00:05:45.951 "bdev_malloc_create", 00:05:45.951 "bdev_null_resize", 00:05:45.951 "bdev_null_delete", 00:05:45.951 "bdev_null_create", 00:05:45.951 "bdev_nvme_cuse_unregister", 00:05:45.951 "bdev_nvme_cuse_register", 00:05:45.951 "bdev_opal_new_user", 00:05:45.951 "bdev_opal_set_lock_state", 00:05:45.951 "bdev_opal_delete", 00:05:45.951 "bdev_opal_get_info", 00:05:45.951 "bdev_opal_create", 00:05:45.951 "bdev_nvme_opal_revert", 00:05:45.951 "bdev_nvme_opal_init", 00:05:45.951 "bdev_nvme_send_cmd", 00:05:45.951 "bdev_nvme_get_path_iostat", 00:05:45.951 "bdev_nvme_get_mdns_discovery_info", 00:05:45.951 "bdev_nvme_stop_mdns_discovery", 00:05:45.951 "bdev_nvme_start_mdns_discovery", 00:05:45.951 "bdev_nvme_set_multipath_policy", 00:05:45.951 "bdev_nvme_set_preferred_path", 00:05:45.951 "bdev_nvme_get_io_paths", 00:05:45.951 "bdev_nvme_remove_error_injection", 00:05:45.951 "bdev_nvme_add_error_injection", 00:05:45.951 "bdev_nvme_get_discovery_info", 00:05:45.951 "bdev_nvme_stop_discovery", 00:05:45.951 "bdev_nvme_start_discovery", 00:05:45.951 "bdev_nvme_get_controller_health_info", 00:05:45.951 "bdev_nvme_disable_controller", 00:05:45.951 "bdev_nvme_enable_controller", 00:05:45.951 "bdev_nvme_reset_controller", 00:05:45.951 "bdev_nvme_get_transport_statistics", 00:05:45.951 "bdev_nvme_apply_firmware", 00:05:45.951 "bdev_nvme_detach_controller", 00:05:45.951 "bdev_nvme_get_controllers", 00:05:45.951 "bdev_nvme_attach_controller", 00:05:45.951 "bdev_nvme_set_hotplug", 00:05:45.951 "bdev_nvme_set_options", 00:05:45.951 "bdev_passthru_delete", 00:05:45.951 "bdev_passthru_create", 00:05:45.951 "bdev_lvol_set_parent_bdev", 00:05:45.951 "bdev_lvol_set_parent", 00:05:45.951 "bdev_lvol_check_shallow_copy", 00:05:45.951 "bdev_lvol_start_shallow_copy", 00:05:45.951 "bdev_lvol_grow_lvstore", 00:05:45.951 "bdev_lvol_get_lvols", 00:05:45.951 "bdev_lvol_get_lvstores", 00:05:45.951 "bdev_lvol_delete", 00:05:45.951 "bdev_lvol_set_read_only", 00:05:45.951 "bdev_lvol_resize", 00:05:45.951 "bdev_lvol_decouple_parent", 00:05:45.951 "bdev_lvol_inflate", 00:05:45.951 "bdev_lvol_rename", 00:05:45.951 "bdev_lvol_clone_bdev", 00:05:45.951 "bdev_lvol_clone", 00:05:45.951 "bdev_lvol_snapshot", 00:05:45.951 "bdev_lvol_create", 00:05:45.951 "bdev_lvol_delete_lvstore", 00:05:45.951 "bdev_lvol_rename_lvstore", 
00:05:45.951 "bdev_lvol_create_lvstore", 00:05:45.951 "bdev_raid_set_options", 00:05:45.951 "bdev_raid_remove_base_bdev", 00:05:45.951 "bdev_raid_add_base_bdev", 00:05:45.951 "bdev_raid_delete", 00:05:45.951 "bdev_raid_create", 00:05:45.951 "bdev_raid_get_bdevs", 00:05:45.951 "bdev_error_inject_error", 00:05:45.951 "bdev_error_delete", 00:05:45.951 "bdev_error_create", 00:05:45.951 "bdev_split_delete", 00:05:45.951 "bdev_split_create", 00:05:45.951 "bdev_delay_delete", 00:05:45.951 "bdev_delay_create", 00:05:45.951 "bdev_delay_update_latency", 00:05:45.951 "bdev_zone_block_delete", 00:05:45.951 "bdev_zone_block_create", 00:05:45.951 "blobfs_create", 00:05:45.951 "blobfs_detect", 00:05:45.951 "blobfs_set_cache_size", 00:05:45.951 "bdev_aio_delete", 00:05:45.951 "bdev_aio_rescan", 00:05:45.951 "bdev_aio_create", 00:05:45.951 "bdev_ftl_set_property", 00:05:45.951 "bdev_ftl_get_properties", 00:05:45.951 "bdev_ftl_get_stats", 00:05:45.951 "bdev_ftl_unmap", 00:05:45.951 "bdev_ftl_unload", 00:05:45.951 "bdev_ftl_delete", 00:05:45.951 "bdev_ftl_load", 00:05:45.951 "bdev_ftl_create", 00:05:45.951 "bdev_virtio_attach_controller", 00:05:45.951 "bdev_virtio_scsi_get_devices", 00:05:45.951 "bdev_virtio_detach_controller", 00:05:45.951 "bdev_virtio_blk_set_hotplug", 00:05:45.951 "bdev_iscsi_delete", 00:05:45.951 "bdev_iscsi_create", 00:05:45.951 "bdev_iscsi_set_options", 00:05:45.951 "accel_error_inject_error", 00:05:45.951 "ioat_scan_accel_module", 00:05:45.951 "dsa_scan_accel_module", 00:05:45.951 "iaa_scan_accel_module", 00:05:45.951 "vfu_virtio_create_scsi_endpoint", 00:05:45.951 "vfu_virtio_scsi_remove_target", 00:05:45.951 "vfu_virtio_scsi_add_target", 00:05:45.951 "vfu_virtio_create_blk_endpoint", 00:05:45.951 "vfu_virtio_delete_endpoint", 00:05:45.951 "keyring_file_remove_key", 00:05:45.951 "keyring_file_add_key", 00:05:45.951 "keyring_linux_set_options", 00:05:45.951 "iscsi_get_histogram", 00:05:45.951 "iscsi_enable_histogram", 00:05:45.951 "iscsi_set_options", 00:05:45.951 "iscsi_get_auth_groups", 00:05:45.951 "iscsi_auth_group_remove_secret", 00:05:45.951 "iscsi_auth_group_add_secret", 00:05:45.951 "iscsi_delete_auth_group", 00:05:45.951 "iscsi_create_auth_group", 00:05:45.951 "iscsi_set_discovery_auth", 00:05:45.951 "iscsi_get_options", 00:05:45.951 "iscsi_target_node_request_logout", 00:05:45.951 "iscsi_target_node_set_redirect", 00:05:45.951 "iscsi_target_node_set_auth", 00:05:45.951 "iscsi_target_node_add_lun", 00:05:45.951 "iscsi_get_stats", 00:05:45.951 "iscsi_get_connections", 00:05:45.951 "iscsi_portal_group_set_auth", 00:05:45.951 "iscsi_start_portal_group", 00:05:45.951 "iscsi_delete_portal_group", 00:05:45.951 "iscsi_create_portal_group", 00:05:45.951 "iscsi_get_portal_groups", 00:05:45.951 "iscsi_delete_target_node", 00:05:45.951 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.951 "iscsi_target_node_add_pg_ig_maps", 00:05:45.951 "iscsi_create_target_node", 00:05:45.951 "iscsi_get_target_nodes", 00:05:45.951 "iscsi_delete_initiator_group", 00:05:45.951 "iscsi_initiator_group_remove_initiators", 00:05:45.951 "iscsi_initiator_group_add_initiators", 00:05:45.951 "iscsi_create_initiator_group", 00:05:45.952 "iscsi_get_initiator_groups", 00:05:45.952 "nvmf_set_crdt", 00:05:45.952 "nvmf_set_config", 00:05:45.952 "nvmf_set_max_subsystems", 00:05:45.952 "nvmf_stop_mdns_prr", 00:05:45.952 "nvmf_publish_mdns_prr", 00:05:45.952 "nvmf_subsystem_get_listeners", 00:05:45.952 "nvmf_subsystem_get_qpairs", 00:05:45.952 "nvmf_subsystem_get_controllers", 00:05:45.952 "nvmf_get_stats", 00:05:45.952 
"nvmf_get_transports", 00:05:45.952 "nvmf_create_transport", 00:05:45.952 "nvmf_get_targets", 00:05:45.952 "nvmf_delete_target", 00:05:45.952 "nvmf_create_target", 00:05:45.952 "nvmf_subsystem_allow_any_host", 00:05:45.952 "nvmf_subsystem_remove_host", 00:05:45.952 "nvmf_subsystem_add_host", 00:05:45.952 "nvmf_ns_remove_host", 00:05:45.952 "nvmf_ns_add_host", 00:05:45.952 "nvmf_subsystem_remove_ns", 00:05:45.952 "nvmf_subsystem_add_ns", 00:05:45.952 "nvmf_subsystem_listener_set_ana_state", 00:05:45.952 "nvmf_discovery_get_referrals", 00:05:45.952 "nvmf_discovery_remove_referral", 00:05:45.952 "nvmf_discovery_add_referral", 00:05:45.952 "nvmf_subsystem_remove_listener", 00:05:45.952 "nvmf_subsystem_add_listener", 00:05:45.952 "nvmf_delete_subsystem", 00:05:45.952 "nvmf_create_subsystem", 00:05:45.952 "nvmf_get_subsystems", 00:05:45.952 "env_dpdk_get_mem_stats", 00:05:45.952 "nbd_get_disks", 00:05:45.952 "nbd_stop_disk", 00:05:45.952 "nbd_start_disk", 00:05:45.952 "ublk_recover_disk", 00:05:45.952 "ublk_get_disks", 00:05:45.952 "ublk_stop_disk", 00:05:45.952 "ublk_start_disk", 00:05:45.952 "ublk_destroy_target", 00:05:45.952 "ublk_create_target", 00:05:45.952 "virtio_blk_create_transport", 00:05:45.952 "virtio_blk_get_transports", 00:05:45.952 "vhost_controller_set_coalescing", 00:05:45.952 "vhost_get_controllers", 00:05:45.952 "vhost_delete_controller", 00:05:45.952 "vhost_create_blk_controller", 00:05:45.952 "vhost_scsi_controller_remove_target", 00:05:45.952 "vhost_scsi_controller_add_target", 00:05:45.952 "vhost_start_scsi_controller", 00:05:45.952 "vhost_create_scsi_controller", 00:05:45.952 "thread_set_cpumask", 00:05:45.952 "framework_get_scheduler", 00:05:45.952 "framework_set_scheduler", 00:05:45.952 "framework_get_reactors", 00:05:45.952 "thread_get_io_channels", 00:05:45.952 "thread_get_pollers", 00:05:45.952 "thread_get_stats", 00:05:45.952 "framework_monitor_context_switch", 00:05:45.952 "spdk_kill_instance", 00:05:45.952 "log_enable_timestamps", 00:05:45.952 "log_get_flags", 00:05:45.952 "log_clear_flag", 00:05:45.952 "log_set_flag", 00:05:45.952 "log_get_level", 00:05:45.952 "log_set_level", 00:05:45.952 "log_get_print_level", 00:05:45.952 "log_set_print_level", 00:05:45.952 "framework_enable_cpumask_locks", 00:05:45.952 "framework_disable_cpumask_locks", 00:05:45.952 "framework_wait_init", 00:05:45.952 "framework_start_init", 00:05:45.952 "scsi_get_devices", 00:05:45.952 "bdev_get_histogram", 00:05:45.952 "bdev_enable_histogram", 00:05:45.952 "bdev_set_qos_limit", 00:05:45.952 "bdev_set_qd_sampling_period", 00:05:45.952 "bdev_get_bdevs", 00:05:45.952 "bdev_reset_iostat", 00:05:45.952 "bdev_get_iostat", 00:05:45.952 "bdev_examine", 00:05:45.952 "bdev_wait_for_examine", 00:05:45.952 "bdev_set_options", 00:05:45.952 "notify_get_notifications", 00:05:45.952 "notify_get_types", 00:05:45.952 "accel_get_stats", 00:05:45.952 "accel_set_options", 00:05:45.952 "accel_set_driver", 00:05:45.952 "accel_crypto_key_destroy", 00:05:45.952 "accel_crypto_keys_get", 00:05:45.952 "accel_crypto_key_create", 00:05:45.952 "accel_assign_opc", 00:05:45.952 "accel_get_module_info", 00:05:45.952 "accel_get_opc_assignments", 00:05:45.952 "vmd_rescan", 00:05:45.952 "vmd_remove_device", 00:05:45.952 "vmd_enable", 00:05:45.952 "sock_get_default_impl", 00:05:45.952 "sock_set_default_impl", 00:05:45.952 "sock_impl_set_options", 00:05:45.952 "sock_impl_get_options", 00:05:45.952 "iobuf_get_stats", 00:05:45.952 "iobuf_set_options", 00:05:45.952 "keyring_get_keys", 00:05:45.952 "framework_get_pci_devices", 
00:05:45.952 "framework_get_config", 00:05:45.952 "framework_get_subsystems", 00:05:45.952 "vfu_tgt_set_base_path", 00:05:45.952 "trace_get_info", 00:05:45.952 "trace_get_tpoint_group_mask", 00:05:45.952 "trace_disable_tpoint_group", 00:05:45.952 "trace_enable_tpoint_group", 00:05:45.952 "trace_clear_tpoint_mask", 00:05:45.952 "trace_set_tpoint_mask", 00:05:45.952 "spdk_get_version", 00:05:45.952 "rpc_get_methods" 00:05:45.952 ] 00:05:45.952 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.952 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.952 14:14:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2817428 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 2817428 ']' 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 2817428 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2817428 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2817428' 00:05:45.952 killing process with pid 2817428 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 2817428 00:05:45.952 14:14:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 2817428 00:05:46.213 00:05:46.213 real 0m1.543s 00:05:46.213 user 0m2.937s 00:05:46.213 sys 0m0.447s 00:05:46.213 14:14:23 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.213 14:14:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.213 ************************************ 00:05:46.213 END TEST spdkcli_tcp 00:05:46.213 ************************************ 00:05:46.213 14:14:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.213 14:14:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.213 14:14:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.213 14:14:23 -- common/autotest_common.sh@10 -- # set +x 00:05:46.474 ************************************ 00:05:46.474 START TEST dpdk_mem_utility 00:05:46.475 ************************************ 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.475 * Looking for test storage... 
00:05:46.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:46.475 14:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.475 14:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2817802 00:05:46.475 14:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2817802 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 2817802 ']' 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.475 14:14:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.475 14:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.475 [2024-06-10 14:14:23.962902] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:46.475 [2024-06-10 14:14:23.962954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817802 ] 00:05:46.475 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.475 [2024-06-10 14:14:24.036854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.734 [2024-06-10 14:14:24.101661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.306 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.306 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:47.306 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.306 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.306 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:47.306 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.306 { 00:05:47.306 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.306 } 00:05:47.306 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:47.306 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.306 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.306 1 heaps totaling size 814.000000 MiB 00:05:47.306 size: 814.000000 MiB heap id: 0 00:05:47.306 end heaps---------- 00:05:47.306 8 mempools totaling size 598.116089 MiB 00:05:47.306 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.306 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.306 size: 84.521057 MiB name: bdev_io_2817802 00:05:47.306 size: 51.011292 MiB name: evtpool_2817802 00:05:47.306 size: 50.003479 MiB name: 
msgpool_2817802 00:05:47.306 size: 21.763794 MiB name: PDU_Pool 00:05:47.306 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.306 size: 0.026123 MiB name: Session_Pool 00:05:47.306 end mempools------- 00:05:47.306 6 memzones totaling size 4.142822 MiB 00:05:47.306 size: 1.000366 MiB name: RG_ring_0_2817802 00:05:47.306 size: 1.000366 MiB name: RG_ring_1_2817802 00:05:47.306 size: 1.000366 MiB name: RG_ring_4_2817802 00:05:47.306 size: 1.000366 MiB name: RG_ring_5_2817802 00:05:47.306 size: 0.125366 MiB name: RG_ring_2_2817802 00:05:47.306 size: 0.015991 MiB name: RG_ring_3_2817802 00:05:47.306 end memzones------- 00:05:47.306 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.306 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:47.306 list of free elements. size: 12.519348 MiB 00:05:47.306 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.306 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.306 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.306 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.306 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.306 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.306 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.306 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.306 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:47.306 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:47.306 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:47.306 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:47.306 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.306 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:47.306 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:47.306 list of standard malloc elements. 
size: 199.218079 MiB 00:05:47.306 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.306 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.306 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.306 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.306 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.306 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.306 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.306 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.306 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.306 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.306 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.306 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.306 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.307 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.307 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.307 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:47.307 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.307 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.307 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.307 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.307 list of memzone associated elements. 
size: 602.262573 MiB 00:05:47.307 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.307 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.307 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.307 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.307 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.307 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2817802_0 00:05:47.307 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.307 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2817802_0 00:05:47.307 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.307 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2817802_0 00:05:47.307 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.307 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.307 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.307 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.307 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.307 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2817802 00:05:47.307 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.307 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2817802 00:05:47.307 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.307 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2817802 00:05:47.307 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.307 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.307 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.307 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.307 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.307 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.307 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2817802 00:05:47.307 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:47.307 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2817802 00:05:47.307 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.307 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2817802 00:05:47.307 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.307 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2817802 00:05:47.307 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.307 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2817802 00:05:47.307 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.307 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.307 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.307 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.307 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.307 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.307 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.307 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2817802 00:05:47.307 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.307 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.307 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:47.307 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.307 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:47.307 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2817802 00:05:47.307 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:47.307 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.307 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:47.307 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2817802 00:05:47.307 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.307 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2817802 00:05:47.307 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:47.307 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.307 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.307 14:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2817802 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 2817802 ']' 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 2817802 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2817802 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2817802' 00:05:47.307 killing process with pid 2817802 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 2817802 00:05:47.307 14:14:24 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 2817802 00:05:47.567 00:05:47.567 real 0m1.269s 00:05:47.567 user 0m1.374s 00:05:47.567 sys 0m0.337s 00:05:47.567 14:14:25 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.567 14:14:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.567 ************************************ 00:05:47.567 END TEST dpdk_mem_utility 00:05:47.567 ************************************ 00:05:47.567 14:14:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.567 14:14:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.567 14:14:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.567 14:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:47.567 ************************************ 00:05:47.567 START TEST event 00:05:47.567 ************************************ 00:05:47.567 14:14:25 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.829 * Looking for test storage... 
00:05:47.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:47.829 14:14:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:47.829 14:14:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.829 14:14:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.829 14:14:25 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:47.829 14:14:25 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.829 14:14:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.829 ************************************ 00:05:47.829 START TEST event_perf 00:05:47.829 ************************************ 00:05:47.829 14:14:25 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.829 Running I/O for 1 seconds...[2024-06-10 14:14:25.316653] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:47.829 [2024-06-10 14:14:25.316769] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818051 ] 00:05:47.829 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.829 [2024-06-10 14:14:25.398669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.090 [2024-06-10 14:14:25.474380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.090 [2024-06-10 14:14:25.474498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.090 [2024-06-10 14:14:25.474637] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.090 Running I/O for 1 seconds...[2024-06-10 14:14:25.474638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.032 00:05:49.032 lcore 0: 178030 00:05:49.032 lcore 1: 178031 00:05:49.032 lcore 2: 178028 00:05:49.032 lcore 3: 178031 00:05:49.032 done. 00:05:49.032 00:05:49.032 real 0m1.234s 00:05:49.032 user 0m4.138s 00:05:49.032 sys 0m0.094s 00:05:49.032 14:14:26 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.032 14:14:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.032 ************************************ 00:05:49.032 END TEST event_perf 00:05:49.032 ************************************ 00:05:49.032 14:14:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.032 14:14:26 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:49.032 14:14:26 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.032 14:14:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.032 ************************************ 00:05:49.032 START TEST event_reactor 00:05:49.032 ************************************ 00:05:49.032 14:14:26 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.293 [2024-06-10 14:14:26.626915] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:05:49.293 [2024-06-10 14:14:26.627013] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818332 ] 00:05:49.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.293 [2024-06-10 14:14:26.704696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.293 [2024-06-10 14:14:26.769071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.236 test_start 00:05:50.236 oneshot 00:05:50.236 tick 100 00:05:50.236 tick 100 00:05:50.236 tick 250 00:05:50.236 tick 100 00:05:50.236 tick 100 00:05:50.236 tick 100 00:05:50.236 tick 250 00:05:50.236 tick 500 00:05:50.236 tick 100 00:05:50.236 tick 100 00:05:50.236 tick 250 00:05:50.236 tick 100 00:05:50.236 tick 100 00:05:50.236 test_end 00:05:50.236 00:05:50.236 real 0m1.216s 00:05:50.236 user 0m1.132s 00:05:50.236 sys 0m0.079s 00:05:50.236 14:14:27 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.236 14:14:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:50.236 ************************************ 00:05:50.236 END TEST event_reactor 00:05:50.236 ************************************ 00:05:50.499 14:14:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.500 14:14:27 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:50.500 14:14:27 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.500 14:14:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.500 ************************************ 00:05:50.500 START TEST event_reactor_perf 00:05:50.500 ************************************ 00:05:50.500 14:14:27 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.500 [2024-06-10 14:14:27.919062] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:05:50.500 [2024-06-10 14:14:27.919162] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818680 ] 00:05:50.500 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.500 [2024-06-10 14:14:28.000759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.500 [2024-06-10 14:14:28.071423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.929 test_start 00:05:51.929 test_end 00:05:51.929 Performance: 371042 events per second 00:05:51.929 00:05:51.929 real 0m1.228s 00:05:51.929 user 0m1.138s 00:05:51.929 sys 0m0.085s 00:05:51.929 14:14:29 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.929 14:14:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.929 ************************************ 00:05:51.929 END TEST event_reactor_perf 00:05:51.929 ************************************ 00:05:51.929 14:14:29 event -- event/event.sh@49 -- # uname -s 00:05:51.929 14:14:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.929 14:14:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.929 14:14:29 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:51.929 14:14:29 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.929 14:14:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.929 ************************************ 00:05:51.929 START TEST event_scheduler 00:05:51.929 ************************************ 00:05:51.929 14:14:29 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.929 * Looking for test storage... 00:05:51.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:51.929 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.929 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2819068 00:05:51.929 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.929 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.930 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2819068 00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 2819068 ']' 00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.930 14:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.930 [2024-06-10 14:14:29.353595] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:51.930 [2024-06-10 14:14:29.353655] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819068 ] 00:05:51.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.930 [2024-06-10 14:14:29.408188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.930 [2024-06-10 14:14:29.470911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.930 [2024-06-10 14:14:29.471034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.930 [2024-06-10 14:14:29.471189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.930 [2024-06-10 14:14:29.471191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.191 14:14:29 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:52.191 14:14:29 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:52.191 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.191 14:14:29 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.191 14:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.191 POWER: Env isn't set yet! 00:05:52.191 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:52.191 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.191 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.191 POWER: Attempting to initialise PSTAT power management... 
00:05:52.191 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:52.191 POWER: Initialized successfully for lcore 0 power management 00:05:52.191 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:52.192 POWER: Initialized successfully for lcore 1 power management 00:05:52.192 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:52.192 POWER: Initialized successfully for lcore 2 power management 00:05:52.192 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:52.192 POWER: Initialized successfully for lcore 3 power management 00:05:52.192 [2024-06-10 14:14:29.573541] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.192 [2024-06-10 14:14:29.573553] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.192 [2024-06-10 14:14:29.573559] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 [2024-06-10 14:14:29.634478] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 ************************************ 00:05:52.192 START TEST scheduler_create_thread 00:05:52.192 ************************************ 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 2 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 3 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 4 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 5 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 6 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 7 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 8 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.192 9 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:52.192 14:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.579 10 00:05:53.579 14:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:53.579 14:14:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:53.579 14:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:53.579 14:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.520 14:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.520 14:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.520 14:14:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.520 14:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.520 14:14:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.091 14:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:55.091 14:14:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.091 14:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:55.091 14:14:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.033 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.033 14:14:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.033 14:14:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.033 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.033 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.296 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.296 00:05:56.296 real 0m4.216s 00:05:56.296 user 0m0.024s 00:05:56.296 sys 0m0.007s 00:05:56.296 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.296 14:14:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.296 ************************************ 00:05:56.296 END TEST scheduler_create_thread 00:05:56.296 ************************************ 00:05:56.557 14:14:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.557 14:14:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2819068 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 2819068 ']' 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 2819068 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2819068 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2819068' 00:05:56.557 killing process with pid 2819068 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 2819068 00:05:56.557 14:14:33 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 2819068 00:05:56.817 [2024-06-10 14:14:34.166583] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:56.817 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:56.817 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:56.817 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:56.817 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:56.817 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:56.817 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:56.817 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:56.817 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:56.817 00:05:56.817 real 0m5.145s 00:05:56.817 user 0m10.896s 00:05:56.817 sys 0m0.313s 00:05:56.817 14:14:34 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.817 14:14:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.817 ************************************ 00:05:56.817 END TEST event_scheduler 00:05:56.817 ************************************ 00:05:56.817 14:14:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:56.817 14:14:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:56.817 14:14:34 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.817 14:14:34 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.817 14:14:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.078 ************************************ 00:05:57.078 START TEST app_repeat 00:05:57.078 ************************************ 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2820124 00:05:57.078 14:14:34 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2820124' 00:05:57.078 Process app_repeat pid: 2820124 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.078 spdk_app_start Round 0 00:05:57.078 14:14:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2820124 /var/tmp/spdk-nbd.sock 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2820124 ']' 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:57.078 14:14:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.078 [2024-06-10 14:14:34.470712] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:05:57.078 [2024-06-10 14:14:34.470780] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820124 ] 00:05:57.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.079 [2024-06-10 14:14:34.549463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.079 [2024-06-10 14:14:34.616294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.079 [2024-06-10 14:14:34.616300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.019 14:14:35 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.019 14:14:35 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:58.019 14:14:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.019 Malloc0 00:05:58.019 14:14:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.280 Malloc1 00:05:58.280 14:14:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.280 14:14:35 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.280 14:14:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.541 /dev/nbd0 00:05:58.541 14:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.541 14:14:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.541 1+0 records in 00:05:58.541 1+0 records out 00:05:58.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271711 s, 15.1 MB/s 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:58.541 14:14:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:58.541 14:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.541 14:14:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.541 14:14:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.802 /dev/nbd1 00:05:58.802 14:14:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.802 14:14:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:58.802 14:14:36 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.802 1+0 records in 00:05:58.802 1+0 records out 00:05:58.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280554 s, 14.6 MB/s 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:58.802 14:14:36 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:58.803 14:14:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.803 14:14:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.803 14:14:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.803 14:14:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.803 14:14:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.063 { 00:05:59.063 "nbd_device": "/dev/nbd0", 00:05:59.063 "bdev_name": "Malloc0" 00:05:59.063 }, 00:05:59.063 { 00:05:59.063 "nbd_device": "/dev/nbd1", 00:05:59.063 "bdev_name": "Malloc1" 00:05:59.063 } 00:05:59.063 ]' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.063 { 00:05:59.063 "nbd_device": "/dev/nbd0", 00:05:59.063 "bdev_name": "Malloc0" 00:05:59.063 }, 00:05:59.063 { 00:05:59.063 "nbd_device": "/dev/nbd1", 00:05:59.063 "bdev_name": "Malloc1" 00:05:59.063 } 00:05:59.063 ]' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.063 /dev/nbd1' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.063 /dev/nbd1' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.063 14:14:36 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.063 256+0 records in 00:05:59.063 256+0 records out 00:05:59.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125025 s, 83.9 MB/s 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.063 256+0 records in 00:05:59.063 256+0 records out 00:05:59.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157772 s, 66.5 MB/s 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.063 14:14:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.064 256+0 records in 00:05:59.064 256+0 records out 00:05:59.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168222 s, 62.3 MB/s 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.064 14:14:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.325 14:14:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.587 14:14:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.587 14:14:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.587 14:14:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.587 14:14:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.849 14:14:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.849 14:14:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.849 14:14:37 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:00.110 [2024-06-10 14:14:37.553893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.110 [2024-06-10 14:14:37.617478] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.110 [2024-06-10 14:14:37.617483] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.110 [2024-06-10 14:14:37.648851] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.110 [2024-06-10 14:14:37.648898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.420 14:14:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.420 14:14:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.420 spdk_app_start Round 1 00:06:03.420 14:14:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2820124 /var/tmp/spdk-nbd.sock 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2820124 ']' 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.420 14:14:40 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:03.420 14:14:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.420 Malloc0 00:06:03.420 14:14:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.685 Malloc1 00:06:03.685 14:14:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
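Editor's note: Round 1 repeats the same setup Round 0 used: two malloc bdevs are created over the /var/tmp/spdk-nbd.sock RPC socket and exported as NBD block devices. A minimal sketch of that setup, using the exact RPCs visible in the trace (the spdk checkout path is a placeholder), looks like:

  rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  modprobe nbd                            # kernel NBD support, probed earlier in the test
  $rpc bdev_malloc_create 64 4096         # 64 MiB bdev, 4 KiB blocks -> Malloc0
  $rpc bdev_malloc_create 64 4096         # second bdev -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0   # expose each bdev as a local block device
  $rpc nbd_start_disk Malloc1 /dev/nbd1
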
00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.685 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.686 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.686 14:14:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.686 /dev/nbd0 00:06:03.686 14:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.686 14:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.686 1+0 records in 00:06:03.686 1+0 records out 00:06:03.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288351 s, 14.2 MB/s 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:03.686 14:14:41 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.946 14:14:41 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:03.946 14:14:41 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:03.946 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.946 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.946 14:14:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.946 /dev/nbd1 00:06:03.946 14:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.946 14:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
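Editor's note: the (( i <= 20 )) / grep -w nbdX /proc/partitions steps in the trace come from the waitfornbd helper in common/autotest_common.sh. A sketch of that pattern is below; the scratch file path and the sleep between retries are assumptions (the successful first-try case traced here never shows a retry).

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do            # wait for the kernel to register the device
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                              # assumed back-off between retries
      done
      for ((i = 1; i <= 20; i++)); do            # prove a single direct-I/O block read works
          dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null \
              || { sleep 0.1; continue; }
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
      done
      return 1
  }
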
00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.947 1+0 records in 00:06:03.947 1+0 records out 00:06:03.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280698 s, 14.6 MB/s 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:03.947 14:14:41 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:03.947 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.947 14:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.947 14:14:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.947 14:14:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.947 14:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.208 { 00:06:04.208 "nbd_device": "/dev/nbd0", 00:06:04.208 "bdev_name": "Malloc0" 00:06:04.208 }, 00:06:04.208 { 00:06:04.208 "nbd_device": "/dev/nbd1", 00:06:04.208 "bdev_name": "Malloc1" 00:06:04.208 } 00:06:04.208 ]' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.208 { 00:06:04.208 "nbd_device": "/dev/nbd0", 00:06:04.208 "bdev_name": "Malloc0" 00:06:04.208 }, 00:06:04.208 { 00:06:04.208 "nbd_device": "/dev/nbd1", 00:06:04.208 "bdev_name": "Malloc1" 00:06:04.208 } 00:06:04.208 ]' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.208 /dev/nbd1' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.208 /dev/nbd1' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.208 256+0 records in 00:06:04.208 256+0 records out 00:06:04.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123112 s, 85.2 MB/s 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.208 14:14:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.469 256+0 records in 00:06:04.469 256+0 records out 00:06:04.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158425 s, 66.2 MB/s 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.469 256+0 records in 00:06:04.469 256+0 records out 00:06:04.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017603 s, 59.6 MB/s 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.469 14:14:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.469 14:14:42 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.469 14:14:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.730 14:14:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.731 14:14:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.731 14:14:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.731 14:14:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.731 14:14:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.991 14:14:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.991 14:14:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.251 14:14:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.511 [2024-06-10 14:14:42.873125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.511 [2024-06-10 14:14:42.936056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.511 [2024-06-10 14:14:42.936062] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.511 [2024-06-10 14:14:42.968129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
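Editor's note: each round that completes above performs the same data-path verification through the NBD exports: 1 MiB of random data is written to each device with direct I/O and byte-compared back with cmp. Stripped of the tracing, and with the scratch file path standing in for the spdk/test/event/nbdrandtest file in the log, the check amounts to:

  pattern=/tmp/nbdrandtest                                  # stand-in scratch file
  dd if=/dev/urandom of=$pattern bs=4096 count=256          # 256 x 4 KiB = 1 MiB random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$pattern of=$nbd bs=4096 count=256 oflag=direct # write through the NBD device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $pattern $nbd                            # read back and compare 1 MiB
  done
  rm $pattern
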
00:06:05.511 [2024-06-10 14:14:42.968164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.813 14:14:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.813 14:14:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.813 spdk_app_start Round 2 00:06:08.813 14:14:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2820124 /var/tmp/spdk-nbd.sock 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2820124 ']' 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:08.813 14:14:45 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:08.813 14:14:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.813 Malloc0 00:06:08.813 14:14:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.813 Malloc1 00:06:08.813 14:14:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.813 14:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.074 /dev/nbd0 00:06:09.074 
14:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.074 14:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.074 1+0 records in 00:06:09.074 1+0 records out 00:06:09.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290751 s, 14.1 MB/s 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:09.074 14:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:09.074 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.074 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.074 14:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.335 /dev/nbd1 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.335 1+0 records in 00:06:09.335 1+0 records out 00:06:09.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288484 s, 14.2 MB/s 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:09.335 14:14:46 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.335 14:14:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.596 14:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.596 { 00:06:09.596 "nbd_device": "/dev/nbd0", 00:06:09.596 "bdev_name": "Malloc0" 00:06:09.596 }, 00:06:09.596 { 00:06:09.596 "nbd_device": "/dev/nbd1", 00:06:09.596 "bdev_name": "Malloc1" 00:06:09.596 } 00:06:09.596 ]' 00:06:09.596 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.596 { 00:06:09.596 "nbd_device": "/dev/nbd0", 00:06:09.596 "bdev_name": "Malloc0" 00:06:09.596 }, 00:06:09.596 { 00:06:09.596 "nbd_device": "/dev/nbd1", 00:06:09.596 "bdev_name": "Malloc1" 00:06:09.596 } 00:06:09.596 ]' 00:06:09.596 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.596 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.596 /dev/nbd1' 00:06:09.596 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.597 /dev/nbd1' 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.597 256+0 records in 00:06:09.597 256+0 records out 00:06:09.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119537 s, 87.7 MB/s 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.597 256+0 records in 00:06:09.597 256+0 records out 00:06:09.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208948 s, 50.2 MB/s 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.597 256+0 records in 00:06:09.597 256+0 records out 00:06:09.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170509 s, 61.5 MB/s 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.597 14:14:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
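Editor's note: teardown mirrors the setup: each NBD export is stopped over RPC and waitfornbd_exit polls /proc/partitions until the kernel has actually dropped the device. A condensed sketch follows; the rpc.py path is a placeholder and the sleep between retries is assumed.

  rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for nbd in /dev/nbd0 /dev/nbd1; do
      $rpc nbd_stop_disk $nbd
      name=$(basename $nbd)
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break   # device gone -> stop waiting
          sleep 0.1
      done
  done
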
00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.858 14:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.118 14:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.378 14:14:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.379 14:14:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.638 14:14:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.638 [2024-06-10 14:14:48.189713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.898 [2024-06-10 14:14:48.253107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.898 [2024-06-10 14:14:48.253112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.898 [2024-06-10 14:14:48.284543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.899 [2024-06-10 14:14:48.284576] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
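Editor's note: the last check in each round, visible just above as nbd_get_count, lists the remaining exports over RPC and counts device paths in the JSON, expecting zero once both disks were stopped. In isolation that is roughly:

  rpc="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # placeholder path
  disks_json=$($rpc nbd_get_disks)                   # "[]" after a clean teardown
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  if [ "$count" -ne 0 ]; then
      echo "NBD devices still exported"
      exit 1
  fi
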
00:06:13.480 14:14:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2820124 /var/tmp/spdk-nbd.sock 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2820124 ']' 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.480 14:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:13.741 14:14:51 event.app_repeat -- event/event.sh@39 -- # killprocess 2820124 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 2820124 ']' 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 2820124 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2820124 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2820124' 00:06:13.741 killing process with pid 2820124 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@968 -- # kill 2820124 00:06:13.741 14:14:51 event.app_repeat -- common/autotest_common.sh@973 -- # wait 2820124 00:06:14.002 spdk_app_start is called in Round 0. 00:06:14.002 Shutdown signal received, stop current app iteration 00:06:14.002 Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 reinitialization... 00:06:14.002 spdk_app_start is called in Round 1. 00:06:14.002 Shutdown signal received, stop current app iteration 00:06:14.002 Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 reinitialization... 00:06:14.002 spdk_app_start is called in Round 2. 00:06:14.002 Shutdown signal received, stop current app iteration 00:06:14.002 Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 reinitialization... 00:06:14.002 spdk_app_start is called in Round 3. 
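Editor's note: the app_repeat target is shut down with the killprocess helper traced above: confirm the PID is alive, check the process name (reactor_0 here) so a sudo wrapper is never signalled directly, then kill and wait. A simplified reconstruction is below; the real helper in common/autotest_common.sh also handles the sudo-wrapped case, which is omitted here.

  killprocess() {
      local pid=$1 name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                    # still running?
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
          [ "$name" = sudo ] && return 1            # real helper treats sudo specially; skipped
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                       # reap it (works when it is our child)
  }
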
00:06:14.002 Shutdown signal received, stop current app iteration 00:06:14.002 14:14:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.002 14:14:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:14.002 00:06:14.002 real 0m16.995s 00:06:14.002 user 0m37.748s 00:06:14.002 sys 0m2.342s 00:06:14.002 14:14:51 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.002 14:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.002 ************************************ 00:06:14.002 END TEST app_repeat 00:06:14.002 ************************************ 00:06:14.002 14:14:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.002 14:14:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:14.002 14:14:51 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.002 14:14:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.002 14:14:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.002 ************************************ 00:06:14.002 START TEST cpu_locks 00:06:14.002 ************************************ 00:06:14.002 14:14:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:14.002 * Looking for test storage... 00:06:14.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:14.263 14:14:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.263 14:14:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.263 14:14:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.263 14:14:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.263 14:14:51 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.263 14:14:51 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.263 14:14:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.263 ************************************ 00:06:14.263 START TEST default_locks 00:06:14.263 ************************************ 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2823714 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2823714 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2823714 ']' 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
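default_locks, which starts here, pins a single spdk_tgt to core 0 with -m 0x1 and then verifies in the trace below that the target actually took the CPU core lock (one spdk_cpu_lock_NNN file per claimed core under /var/tmp, as the overlapped-coremask case later lists). The check reduces to asking lslocks whether the target's pid holds such a file; a sketch with the workspace path shortened:

    # locks_exist, as traced at event/cpu_locks.sh@22 below
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    build/bin/spdk_tgt -m 0x1 &           # pin the target to core 0
    pid=$!
    # waitforlisten (autotest_common.sh) blocks here until the RPC socket is up
    locks_exist "$pid" && echo "pid $pid holds its core lock"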
00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.263 14:14:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.263 [2024-06-10 14:14:51.696264] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:14.263 [2024-06-10 14:14:51.696331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823714 ] 00:06:14.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.263 [2024-06-10 14:14:51.773832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.263 [2024-06-10 14:14:51.846149] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.204 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.204 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:15.204 14:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2823714 00:06:15.204 14:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2823714 00:06:15.204 14:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.464 lslocks: write error 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2823714 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 2823714 ']' 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 2823714 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2823714 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2823714' 00:06:15.464 killing process with pid 2823714 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 2823714 00:06:15.464 14:14:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 2823714 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2823714 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2823714 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2823714 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2823714 ']' 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2823714) - No such process 00:06:15.725 ERROR: process (pid: 2823714) is no longer running 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.725 00:06:15.725 real 0m1.487s 00:06:15.725 user 0m1.656s 00:06:15.725 sys 0m0.478s 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.725 14:14:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 ************************************ 00:06:15.725 END TEST default_locks 00:06:15.725 ************************************ 00:06:15.725 14:14:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.725 14:14:53 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.725 14:14:53 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.725 14:14:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 ************************************ 00:06:15.725 START TEST default_locks_via_rpc 00:06:15.725 ************************************ 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2824080 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2824080 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.725 14:14:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2824080 ']' 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.725 14:14:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.725 [2024-06-10 14:14:53.259407] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:15.725 [2024-06-10 14:14:53.259471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824080 ] 00:06:15.725 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.986 [2024-06-10 14:14:53.338955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.986 [2024-06-10 14:14:53.411723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2824080 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2824080 00:06:16.558 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2824080 00:06:17.128 14:14:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 2824080 ']' 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 2824080 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2824080 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2824080' 00:06:17.128 killing process with pid 2824080 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 2824080 00:06:17.128 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 2824080 00:06:17.389 00:06:17.389 real 0m1.685s 00:06:17.389 user 0m1.863s 00:06:17.389 sys 0m0.557s 00:06:17.389 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.389 14:14:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.389 ************************************ 00:06:17.389 END TEST default_locks_via_rpc 00:06:17.389 ************************************ 00:06:17.389 14:14:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.389 14:14:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.389 14:14:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.389 14:14:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.389 ************************************ 00:06:17.389 START TEST non_locking_app_on_locked_coremask 00:06:17.389 ************************************ 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2824448 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2824448 /var/tmp/spdk.sock 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2824448 ']' 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
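default_locks_via_rpc, which finished just above, exercises the same locks but toggles them at runtime over JSON-RPC instead of at startup: the locks are dropped with framework_disable_cpumask_locks, re-claimed with framework_enable_cpumask_locks, and locks_exist is checked again. A sketch using scripts/rpc.py against the default /var/tmp/spdk.sock socket; rpc_cmd in the trace is assumed to be a thin wrapper around that script.

    pid=$(pgrep -x spdk_tgt | head -n1)       # pid of the running target (single instance assumed)
    rpc=scripts/rpc.py                         # shortened; the trace uses the full workspace path
    $rpc framework_disable_cpumask_locks       # release the per-core lock files
    $rpc framework_enable_cpumask_locks        # claim them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks re-acquired by $pid"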
00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.389 14:14:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.649 [2024-06-10 14:14:55.017299] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:17.650 [2024-06-10 14:14:55.017357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824448 ] 00:06:17.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.650 [2024-06-10 14:14:55.095681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.650 [2024-06-10 14:14:55.165206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2824706 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2824706 /var/tmp/spdk2.sock 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2824706 ']' 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.592 14:14:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.592 [2024-06-10 14:14:55.924429] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:18.592 [2024-06-10 14:14:55.924482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824706 ] 00:06:18.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.592 [2024-06-10 14:14:56.012076] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.592 [2024-06-10 14:14:56.012105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.592 [2024-06-10 14:14:56.140889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.534 14:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.534 14:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:19.534 14:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2824448 00:06:19.534 14:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.534 14:14:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2824448 00:06:19.793 lslocks: write error 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2824448 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2824448 ']' 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2824448 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2824448 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:19.793 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:20.054 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2824448' 00:06:20.054 killing process with pid 2824448 00:06:20.054 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2824448 00:06:20.054 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2824448 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2824706 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2824706 ']' 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2824706 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2824706 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2824706' 00:06:20.314 
killing process with pid 2824706 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2824706 00:06:20.314 14:14:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2824706 00:06:20.576 00:06:20.576 real 0m3.107s 00:06:20.576 user 0m3.537s 00:06:20.576 sys 0m0.891s 00:06:20.576 14:14:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.576 14:14:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.576 ************************************ 00:06:20.576 END TEST non_locking_app_on_locked_coremask 00:06:20.576 ************************************ 00:06:20.576 14:14:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.576 14:14:58 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:20.576 14:14:58 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.576 14:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.576 ************************************ 00:06:20.576 START TEST locking_app_on_unlocked_coremask 00:06:20.576 ************************************ 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2825156 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2825156 /var/tmp/spdk.sock 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2825156 ']' 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:20.576 14:14:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.835 [2024-06-10 14:14:58.201319] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:20.835 [2024-06-10 14:14:58.201367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825156 ] 00:06:20.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.835 [2024-06-10 14:14:58.276385] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
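The two cases around this point rely on --disable-cpumask-locks plus a second RPC socket so that two targets can share core 0: non_locking_app_on_locked_coremask (above) started the second instance without lock claims, and locking_app_on_unlocked_coremask (starting here) starts the first one that way instead. The basic shape, as a sketch:

    # first instance owns /var/tmp/spdk.sock; whether it claims core 0 depends on the case
    build/bin/spdk_tgt -m 0x1 &
    # second instance shares core 0: no lock claim, and its own RPC socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &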
00:06:20.835 [2024-06-10 14:14:58.276410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.835 [2024-06-10 14:14:58.340769] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2825328 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2825328 /var/tmp/spdk2.sock 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2825328 ']' 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.775 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.775 [2024-06-10 14:14:59.081251] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:21.775 [2024-06-10 14:14:59.081302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825328 ] 00:06:21.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.775 [2024-06-10 14:14:59.168727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.775 [2024-06-10 14:14:59.298002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.715 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.715 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:22.715 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2825328 00:06:22.715 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2825328 00:06:22.715 14:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.715 lslocks: write error 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2825156 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2825156 ']' 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2825156 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2825156 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2825156' 00:06:22.715 killing process with pid 2825156 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2825156 00:06:22.715 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2825156 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2825328 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2825328 ']' 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2825328 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2825328 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2825328' 00:06:23.286 killing process with pid 2825328 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2825328 00:06:23.286 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2825328 00:06:23.547 00:06:23.547 real 0m2.792s 00:06:23.548 user 0m3.202s 00:06:23.548 sys 0m0.759s 00:06:23.548 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.548 14:15:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 ************************************ 00:06:23.548 END TEST locking_app_on_unlocked_coremask 00:06:23.548 ************************************ 00:06:23.548 14:15:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.548 14:15:00 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:23.548 14:15:00 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.548 14:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 ************************************ 00:06:23.548 START TEST locking_app_on_locked_coremask 00:06:23.548 ************************************ 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2825904 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2825904 /var/tmp/spdk.sock 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2825904 ']' 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:23.548 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 [2024-06-10 14:15:01.059723] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:23.548 [2024-06-10 14:15:01.059770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825904 ] 00:06:23.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.548 [2024-06-10 14:15:01.135702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.808 [2024-06-10 14:15:01.200569] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2825985 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2825985 /var/tmp/spdk2.sock 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2825985 /var/tmp/spdk2.sock 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2825985 /var/tmp/spdk2.sock 00:06:24.379 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2825985 ']' 00:06:24.380 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.380 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:24.380 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.380 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:24.380 14:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.641 [2024-06-10 14:15:01.986230] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:24.641 [2024-06-10 14:15:01.986282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825985 ] 00:06:24.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.641 [2024-06-10 14:15:02.073574] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2825904 has claimed it. 00:06:24.641 [2024-06-10 14:15:02.073614] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2825985) - No such process 00:06:25.212 ERROR: process (pid: 2825985) is no longer running 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2825904 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2825904 00:06:25.212 14:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.783 lslocks: write error 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2825904 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2825904 ']' 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2825904 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2825904 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2825904' 00:06:25.783 killing process with pid 2825904 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2825904 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2825904 00:06:25.783 00:06:25.783 real 0m2.339s 00:06:25.783 user 0m2.723s 00:06:25.783 sys 0m0.614s 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.783 14:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.783 ************************************ 00:06:25.783 END TEST locking_app_on_locked_coremask 00:06:25.783 ************************************ 00:06:26.044 14:15:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.044 14:15:03 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:26.044 14:15:03 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.044 14:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.044 ************************************ 00:06:26.044 START TEST locking_overlapped_coremask 00:06:26.044 ************************************ 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2826343 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2826343 /var/tmp/spdk.sock 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2826343 ']' 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.044 14:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.044 [2024-06-10 14:15:03.466683] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
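locking_app_on_locked_coremask above showed the failure mode when the locks stay enabled: a second target whose mask overlaps an already-claimed core logs 'Cannot create lock on core 0, probably process 2825904 has claimed it' and exits, so the NOT/waitforlisten pair reports the dead pid. locking_overlapped_coremask, starting here, repeats that with partially overlapping masks; roughly:

    build/bin/spdk_tgt -m 0x7 &                         # claims cores 0-2
    first=$!
    # mask 0x1c asks for cores 2-4; core 2 is already locked, so startup aborts
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock \
        || echo "second instance could not claim core 2 (held by pid $first)"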
00:06:26.044 [2024-06-10 14:15:03.466736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826343 ] 00:06:26.044 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.044 [2024-06-10 14:15:03.545624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.044 [2024-06-10 14:15:03.617041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.044 [2024-06-10 14:15:03.617177] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.044 [2024-06-10 14:15:03.617181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2826657 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2826657 /var/tmp/spdk2.sock 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2826657 /var/tmp/spdk2.sock 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2826657 /var/tmp/spdk2.sock 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2826657 ']' 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.988 14:15:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.988 [2024-06-10 14:15:04.353615] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:26.988 [2024-06-10 14:15:04.353666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826657 ] 00:06:26.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.988 [2024-06-10 14:15:04.422973] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2826343 has claimed it. 00:06:26.988 [2024-06-10 14:15:04.423000] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2826657) - No such process 00:06:27.560 ERROR: process (pid: 2826657) is no longer running 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2826343 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 2826343 ']' 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 2826343 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2826343 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2826343' 00:06:27.560 killing process with pid 2826343 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
2826343 00:06:27.560 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 2826343 00:06:27.821 00:06:27.821 real 0m1.883s 00:06:27.821 user 0m5.413s 00:06:27.821 sys 0m0.386s 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.821 ************************************ 00:06:27.821 END TEST locking_overlapped_coremask 00:06:27.821 ************************************ 00:06:27.821 14:15:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.821 14:15:05 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:27.821 14:15:05 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.821 14:15:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.821 ************************************ 00:06:27.821 START TEST locking_overlapped_coremask_via_rpc 00:06:27.821 ************************************ 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2826726 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2826726 /var/tmp/spdk.sock 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2826726 ']' 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:27.821 14:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.082 [2024-06-10 14:15:05.425796] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:28.082 [2024-06-10 14:15:05.425851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826726 ] 00:06:28.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.082 [2024-06-10 14:15:05.505432] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
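Before tearing the overlapped case down, check_remaining_locks (traced above at event/cpu_locks.sh@36-38) confirms that exactly the lock files for mask 0x7 exist and nothing extra was left behind:

    # compare the lock files on disk with the set expected for -m 0x7 (cores 0-2)
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"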
00:06:28.082 [2024-06-10 14:15:05.505468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.082 [2024-06-10 14:15:05.583249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.082 [2024-06-10 14:15:05.583371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.082 [2024-06-10 14:15:05.583606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.021 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2827053 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2827053 /var/tmp/spdk2.sock 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2827053 ']' 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.022 14:15:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.022 [2024-06-10 14:15:06.350018] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:29.022 [2024-06-10 14:15:06.350074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827053 ] 00:06:29.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.022 [2024-06-10 14:15:06.420538] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.022 [2024-06-10 14:15:06.420562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.022 [2024-06-10 14:15:06.530760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.022 [2024-06-10 14:15:06.530915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.022 [2024-06-10 14:15:06.530918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.977 [2024-06-10 14:15:07.223376] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2826726 has claimed it. 
00:06:29.977 request: 00:06:29.977 { 00:06:29.977 "method": "framework_enable_cpumask_locks", 00:06:29.977 "req_id": 1 00:06:29.977 } 00:06:29.977 Got JSON-RPC error response 00:06:29.977 response: 00:06:29.977 { 00:06:29.977 "code": -32603, 00:06:29.977 "message": "Failed to claim CPU core: 2" 00:06:29.977 } 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2826726 /var/tmp/spdk.sock 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2826726 ']' 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2827053 /var/tmp/spdk2.sock 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2827053 ']' 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
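Editorial note, not part of the captured output: the -32603 "Failed to claim CPU core: 2" response above is the expected outcome of framework_enable_cpumask_locks when two targets overlap on a core. A minimal shell sketch of the scenario this test exercises, using only the flags, RPC name, socket paths, and lock-file names that appear in this run; relative paths into the SPDK build tree are illustrative.
# Sketch only; assumes a built SPDK tree with build/bin/spdk_tgt and scripts/rpc.py available.
build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, lock files not taken at startup
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, overlaps the first target on core 2
scripts/rpc.py framework_enable_cpumask_locks                                # first target claims /var/tmp/spdk_cpu_lock_000..002
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # expected to fail: core 2 already locked by the first pid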
00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.977 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.292 00:06:30.292 real 0m2.251s 00:06:30.292 user 0m1.012s 00:06:30.292 sys 0m0.163s 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.292 14:15:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.292 ************************************ 00:06:30.292 END TEST locking_overlapped_coremask_via_rpc 00:06:30.292 ************************************ 00:06:30.292 14:15:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.292 14:15:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2826726 ]] 00:06:30.292 14:15:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2826726 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2826726 ']' 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2826726 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2826726 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2826726' 00:06:30.292 killing process with pid 2826726 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2826726 00:06:30.292 14:15:07 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2826726 00:06:30.553 14:15:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2827053 ]] 00:06:30.553 14:15:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2827053 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2827053 ']' 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2827053 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2827053 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2827053' 00:06:30.553 killing process with pid 2827053 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2827053 00:06:30.553 14:15:07 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2827053 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2826726 ]] 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2826726 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2826726 ']' 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2826726 00:06:30.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2826726) - No such process 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2826726 is not found' 00:06:30.814 Process with pid 2826726 is not found 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2827053 ]] 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2827053 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2827053 ']' 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2827053 00:06:30.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2827053) - No such process 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2827053 is not found' 00:06:30.814 Process with pid 2827053 is not found 00:06:30.814 14:15:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.814 00:06:30.814 real 0m16.676s 00:06:30.814 user 0m29.932s 00:06:30.814 sys 0m4.728s 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.814 14:15:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 END TEST cpu_locks 00:06:30.814 ************************************ 00:06:30.814 00:06:30.814 real 0m43.063s 00:06:30.814 user 1m25.202s 00:06:30.814 sys 0m8.023s 00:06:30.814 14:15:08 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.814 14:15:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 END TEST event 00:06:30.814 ************************************ 00:06:30.814 14:15:08 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:30.814 14:15:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.814 14:15:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.814 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 START TEST thread 00:06:30.814 ************************************ 00:06:30.814 14:15:08 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:30.814 * Looking for test storage... 00:06:30.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:30.814 14:15:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.814 14:15:08 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:30.814 14:15:08 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.814 14:15:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 START TEST thread_poller_perf 00:06:30.814 ************************************ 00:06:30.814 14:15:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.814 [2024-06-10 14:15:08.405705] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:30.814 [2024-06-10 14:15:08.405799] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827496 ] 00:06:31.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.075 [2024-06-10 14:15:08.487425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.075 [2024-06-10 14:15:08.557288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.075 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:32.456 ====================================== 00:06:32.456 busy:2413146756 (cyc) 00:06:32.456 total_run_count: 288000 00:06:32.456 tsc_hz: 2400000000 (cyc) 00:06:32.456 ====================================== 00:06:32.456 poller_cost: 8378 (cyc), 3490 (nsec) 00:06:32.456 00:06:32.456 real 0m1.237s 00:06:32.456 user 0m1.140s 00:06:32.456 sys 0m0.092s 00:06:32.456 14:15:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.456 14:15:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.456 ************************************ 00:06:32.456 END TEST thread_poller_perf 00:06:32.456 ************************************ 00:06:32.456 14:15:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.456 14:15:09 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:32.456 14:15:09 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.456 14:15:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.456 ************************************ 00:06:32.456 START TEST thread_poller_perf 00:06:32.456 ************************************ 00:06:32.456 14:15:09 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.456 [2024-06-10 14:15:09.708968] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:32.456 [2024-06-10 14:15:09.709062] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828082 ] 00:06:32.456 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.456 [2024-06-10 14:15:09.791074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.456 [2024-06-10 14:15:09.864037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.456 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:33.394 ====================================== 00:06:33.394 busy:2402146496 (cyc) 00:06:33.394 total_run_count: 3807000 00:06:33.394 tsc_hz: 2400000000 (cyc) 00:06:33.394 ====================================== 00:06:33.394 poller_cost: 630 (cyc), 262 (nsec) 00:06:33.394 00:06:33.394 real 0m1.232s 00:06:33.394 user 0m1.147s 00:06:33.394 sys 0m0.079s 00:06:33.394 14:15:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.394 14:15:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.394 ************************************ 00:06:33.394 END TEST thread_poller_perf 00:06:33.394 ************************************ 00:06:33.394 14:15:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.394 00:06:33.394 real 0m2.671s 00:06:33.394 user 0m2.354s 00:06:33.394 sys 0m0.315s 00:06:33.394 14:15:10 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.394 14:15:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.394 ************************************ 00:06:33.394 END TEST thread 00:06:33.394 ************************************ 00:06:33.654 14:15:10 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:33.654 14:15:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:33.654 14:15:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.654 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.654 ************************************ 00:06:33.654 START TEST accel 00:06:33.654 ************************************ 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:33.654 * Looking for test storage... 00:06:33.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:33.654 14:15:11 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:33.654 14:15:11 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:33.654 14:15:11 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.654 14:15:11 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2828683 00:06:33.654 14:15:11 accel -- accel/accel.sh@63 -- # waitforlisten 2828683 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@830 -- # '[' -z 2828683 ']' 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
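Editorial note, not part of the captured output: in the two poller_perf reports above, poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds with tsc_hz. A short shell sketch reproducing the first run's numbers; the values are copied from the report, and the formula is an inference from the printed fields rather than a quote of the tool's source.
# Values from the 1-microsecond-period run above.
busy=2413146756 total_run_count=288000 tsc_hz=2400000000
cyc=$(( busy / total_run_count ))               # 8378 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))           # 3490 nanoseconds per poll
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
# The 0-microsecond-period run works out the same way: 2402146496 / 3807000 = 630 cyc, i.e. 262 nsec.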
00:06:33.654 14:15:11 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:33.654 14:15:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.654 14:15:11 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:33.654 14:15:11 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:33.654 14:15:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.654 14:15:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.654 14:15:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.654 14:15:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.654 14:15:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.654 14:15:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:33.654 14:15:11 accel -- accel/accel.sh@41 -- # jq -r . 00:06:33.654 [2024-06-10 14:15:11.161448] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:33.654 [2024-06-10 14:15:11.161506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828683 ] 00:06:33.655 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.655 [2024-06-10 14:15:11.237226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.915 [2024-06-10 14:15:11.303619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.485 14:15:11 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:34.485 14:15:11 accel -- common/autotest_common.sh@863 -- # return 0 00:06:34.485 14:15:11 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:34.485 14:15:11 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:34.485 14:15:11 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:34.485 14:15:11 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:34.485 14:15:11 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.485 14:15:11 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:34.485 14:15:11 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:34.485 14:15:11 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.485 14:15:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.485 14:15:11 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.485 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.485 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.485 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 
14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.486 14:15:11 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.486 14:15:11 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.486 14:15:11 accel -- accel/accel.sh@75 -- # killprocess 2828683 00:06:34.486 14:15:11 accel -- common/autotest_common.sh@949 -- # '[' -z 2828683 ']' 00:06:34.486 14:15:11 accel -- common/autotest_common.sh@953 -- # kill -0 2828683 00:06:34.486 14:15:11 accel -- common/autotest_common.sh@954 -- # uname 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2828683 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2828683' 00:06:34.486 killing process with pid 2828683 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@968 -- # kill 2828683 00:06:34.486 14:15:12 accel -- common/autotest_common.sh@973 -- # wait 2828683 00:06:34.746 14:15:12 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:34.746 14:15:12 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:34.746 14:15:12 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:34.746 14:15:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.746 14:15:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.746 14:15:12 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:34.746 14:15:12 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:34.746 14:15:12 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.746 14:15:12 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:35.005 14:15:12 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:35.005 14:15:12 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:35.005 14:15:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.005 14:15:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.005 ************************************ 00:06:35.005 START TEST accel_missing_filename 00:06:35.005 ************************************ 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.005 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:35.005 14:15:12 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:35.005 [2024-06-10 14:15:12.421406] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:35.005 [2024-06-10 14:15:12.421470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828868 ] 00:06:35.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.005 [2024-06-10 14:15:12.498667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.005 [2024-06-10 14:15:12.578308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.266 [2024-06-10 14:15:12.610356] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.266 [2024-06-10 14:15:12.647346] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.266 A filename is required. 
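Editorial note, not part of the captured output: the "A filename is required." error above is the intended negative result — the invocation deliberately omits -l, which the accel_perf usage text (printed later in this log) requires for compress/decompress workloads. The compress_verify case that follows then supplies -l but adds -y, which compress rejects. A hedged example of a well-formed compress invocation; the input path is illustrative.
# Sketch only; any existing uncompressed file can serve as the -l input.
build/examples/accel_perf -t 1 -w compress -l /path/to/uncompressed/input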
00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.266 00:06:35.266 real 0m0.305s 00:06:35.266 user 0m0.202s 00:06:35.266 sys 0m0.118s 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.266 14:15:12 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:35.266 ************************************ 00:06:35.266 END TEST accel_missing_filename 00:06:35.266 ************************************ 00:06:35.266 14:15:12 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.266 14:15:12 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:35.266 14:15:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.266 14:15:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.266 ************************************ 00:06:35.266 START TEST accel_compress_verify 00:06:35.266 ************************************ 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.266 14:15:12 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.266 
14:15:12 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:35.266 14:15:12 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:35.266 [2024-06-10 14:15:12.793686] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:35.266 [2024-06-10 14:15:12.793772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829078 ] 00:06:35.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.526 [2024-06-10 14:15:12.873905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.526 [2024-06-10 14:15:12.947479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.526 [2024-06-10 14:15:12.979768] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.526 [2024-06-10 14:15:13.016693] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.526 00:06:35.526 Compression does not support the verify option, aborting. 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.526 00:06:35.526 real 0m0.304s 00:06:35.526 user 0m0.219s 00:06:35.526 sys 0m0.125s 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.526 14:15:13 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.526 ************************************ 00:06:35.526 END TEST accel_compress_verify 00:06:35.526 ************************************ 00:06:35.526 14:15:13 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.526 14:15:13 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:35.526 14:15:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.526 14:15:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.787 ************************************ 00:06:35.787 START TEST accel_wrong_workload 00:06:35.787 ************************************ 00:06:35.787 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:35.787 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:35.788 14:15:13 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:35.788 Unsupported workload type: foobar 00:06:35.788 [2024-06-10 14:15:13.156785] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.788 accel_perf options: 00:06:35.788 [-h help message] 00:06:35.788 [-q queue depth per core] 00:06:35.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.788 [-T number of threads per core 00:06:35.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.788 [-t time in seconds] 00:06:35.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.788 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.788 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.788 [-S for crc32c workload, use this seed value (default 0) 00:06:35.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.788 [-f for fill workload, use this BYTE value (default 255) 00:06:35.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.788 [-y verify result if this switch is on] 00:06:35.788 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.788 Can be used to spread operations across a wider range of memory. 
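Editorial note, not part of the captured output: "foobar" is rejected because -w must name one of the workloads listed in the usage text above. For contrast, the accel_crc32c test later in this log runs a valid invocation; the sketch below repeats its flags (1-second run, crc32c workload, seed 32, verify on).
# Same options the accel_crc32c test passes to accel_perf.
build/examples/accel_perf -t 1 -w crc32c -S 32 -y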
00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.788 00:06:35.788 real 0m0.031s 00:06:35.788 user 0m0.034s 00:06:35.788 sys 0m0.014s 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.788 14:15:13 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 END TEST accel_wrong_workload 00:06:35.788 ************************************ 00:06:35.788 14:15:13 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 START TEST accel_negative_buffers 00:06:35.788 ************************************ 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:35.788 14:15:13 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:35.788 -x option must be non-negative. 
00:06:35.788 [2024-06-10 14:15:13.252386] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:35.788 accel_perf options: 00:06:35.788 [-h help message] 00:06:35.788 [-q queue depth per core] 00:06:35.788 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.788 [-T number of threads per core 00:06:35.788 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.788 [-t time in seconds] 00:06:35.788 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.788 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.788 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.788 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.788 [-S for crc32c workload, use this seed value (default 0) 00:06:35.788 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.788 [-f for fill workload, use this BYTE value (default 255) 00:06:35.788 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.788 [-y verify result if this switch is on] 00:06:35.788 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.788 Can be used to spread operations across a wider range of memory. 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.788 00:06:35.788 real 0m0.032s 00:06:35.788 user 0m0.018s 00:06:35.788 sys 0m0.014s 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.788 14:15:13 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 END TEST accel_negative_buffers 00:06:35.788 ************************************ 00:06:35.788 Error: writing output failed: Broken pipe 00:06:35.788 14:15:13 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.788 14:15:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.788 ************************************ 00:06:35.788 START TEST accel_crc32c 00:06:35.788 ************************************ 00:06:35.788 14:15:13 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:35.788 14:15:13 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.788 [2024-06-10 14:15:13.356114] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:35.788 [2024-06-10 14:15:13.356178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829158 ] 00:06:36.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.048 [2024-06-10 14:15:13.433321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.048 [2024-06-10 14:15:13.499919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.048 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.049 14:15:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.433 14:15:14 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.433 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.434 14:15:14 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.434 00:06:37.434 real 0m1.296s 00:06:37.434 user 0m1.180s 00:06:37.434 sys 0m0.118s 00:06:37.434 14:15:14 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.434 14:15:14 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.434 ************************************ 00:06:37.434 END TEST accel_crc32c 00:06:37.434 ************************************ 00:06:37.434 14:15:14 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.434 14:15:14 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:37.434 14:15:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.434 14:15:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.434 ************************************ 00:06:37.434 START TEST accel_crc32c_C2 00:06:37.434 ************************************ 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.434 14:15:14 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.434 [2024-06-10 14:15:14.717045] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:37.434 [2024-06-10 14:15:14.717121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829508 ] 00:06:37.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.434 [2024-06-10 14:15:14.796537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.434 [2024-06-10 14:15:14.866269] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.434 14:15:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 
14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.815 00:06:38.815 real 0m1.301s 00:06:38.815 user 0m1.183s 00:06:38.815 sys 0m0.120s 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.815 14:15:15 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 END TEST accel_crc32c_C2 00:06:38.815 ************************************ 00:06:38.815 14:15:16 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:38.815 14:15:16 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:38.815 14:15:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.815 14:15:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 START TEST accel_copy 00:06:38.815 ************************************ 00:06:38.815 14:15:16 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:38.815 14:15:16 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:38.815 [2024-06-10 14:15:16.081919] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:38.815 [2024-06-10 14:15:16.082003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829808 ] 00:06:38.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.815 [2024-06-10 14:15:16.162072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.815 [2024-06-10 14:15:16.230091] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:38.815 14:15:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.198 14:15:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:40.199 14:15:17 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.199 00:06:40.199 real 0m1.301s 00:06:40.199 user 0m1.185s 00:06:40.199 sys 0m0.117s 00:06:40.199 14:15:17 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.199 14:15:17 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.199 ************************************ 00:06:40.199 END TEST accel_copy 00:06:40.199 ************************************ 00:06:40.199 14:15:17 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.199 14:15:17 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:40.199 14:15:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.199 14:15:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.199 ************************************ 00:06:40.199 START TEST accel_fill 00:06:40.199 ************************************ 00:06:40.199 14:15:17 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.199 14:15:17 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:40.199 [2024-06-10 14:15:17.449384] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:40.199 [2024-06-10 14:15:17.449444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830002 ] 00:06:40.199 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.199 [2024-06-10 14:15:17.526175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.199 [2024-06-10 14:15:17.594956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.199 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:40.200 14:15:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:41.140 14:15:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.140 00:06:41.140 real 0m1.298s 00:06:41.140 user 0m1.188s 00:06:41.140 sys 0m0.112s 00:06:41.140 14:15:18 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.140 14:15:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:41.140 ************************************ 00:06:41.140 END TEST accel_fill 00:06:41.140 ************************************ 00:06:41.400 14:15:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:41.400 14:15:18 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:41.400 14:15:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.400 14:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.400 ************************************ 00:06:41.400 START TEST accel_copy_crc32c 00:06:41.400 ************************************ 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:41.400 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:06:41.400 [2024-06-10 14:15:18.810156] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:41.400 [2024-06-10 14:15:18.810219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830244 ] 00:06:41.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.400 [2024-06-10 14:15:18.890312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.400 [2024-06-10 14:15:18.962241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.661 14:15:18 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.661 14:15:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.602 00:06:42.602 real 0m1.307s 00:06:42.602 user 0m1.190s 00:06:42.602 sys 0m0.118s 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.602 14:15:20 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:42.602 ************************************ 00:06:42.602 END TEST accel_copy_crc32c 00:06:42.602 ************************************ 00:06:42.602 14:15:20 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.602 14:15:20 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:42.602 14:15:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.602 14:15:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.602 ************************************ 00:06:42.602 START TEST accel_copy_crc32c_C2 00:06:42.602 ************************************ 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.602 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.603 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.603 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.603 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.603 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:42.603 [2024-06-10 14:15:20.195023] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:42.603 [2024-06-10 14:15:20.195083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830602 ] 00:06:42.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.863 [2024-06-10 14:15:20.273840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.863 [2024-06-10 14:15:20.339884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:42.863 14:15:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.863 14:15:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.248 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.248 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.249 00:06:44.249 real 0m1.299s 00:06:44.249 user 0m1.191s 00:06:44.249 sys 0m0.110s 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.249 14:15:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:44.249 ************************************ 00:06:44.249 END TEST accel_copy_crc32c_C2 00:06:44.249 ************************************ 00:06:44.249 14:15:21 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:44.249 14:15:21 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:44.249 14:15:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.249 14:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.249 ************************************ 00:06:44.249 START TEST accel_dualcast 00:06:44.249 ************************************ 00:06:44.249 14:15:21 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:44.249 [2024-06-10 14:15:21.565294] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:44.249 [2024-06-10 14:15:21.565382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830949 ] 00:06:44.249 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.249 [2024-06-10 14:15:21.641938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.249 [2024-06-10 14:15:21.706868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 
14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:44.249 14:15:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:45.635 14:15:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.635 00:06:45.635 real 0m1.296s 00:06:45.635 user 0m1.185s 00:06:45.635 sys 0m0.112s 00:06:45.635 14:15:22 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.635 14:15:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:45.635 ************************************ 00:06:45.635 END TEST accel_dualcast 00:06:45.635 ************************************ 00:06:45.635 14:15:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:45.635 14:15:22 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:45.635 14:15:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.635 14:15:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.635 ************************************ 00:06:45.635 START TEST accel_compare 00:06:45.635 ************************************ 00:06:45.635 14:15:22 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:45.635 14:15:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:45.635 [2024-06-10 14:15:22.922811] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
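Each completed test above closes with a wall-clock summary (dualcast just reported real 0m1.296s, user 0m1.185s, sys 0m0.112s). That formatting matches bash's time keyword, so the harness presumably times the whole test function; a rough sketch of that wrapper shape, not the exact autotest_common.sh code:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"            # emits the real/user/sys lines seen after every test
      echo "END TEST $name"
  }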
00:06:45.636 [2024-06-10 14:15:22.922869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831298 ] 00:06:45.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.636 [2024-06-10 14:15:23.000516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.636 [2024-06-10 14:15:23.074990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.636 14:15:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:47.023 14:15:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.023 00:06:47.023 real 0m1.306s 00:06:47.023 user 0m1.184s 00:06:47.023 sys 0m0.124s 00:06:47.023 14:15:24 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.023 14:15:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:47.023 ************************************ 00:06:47.023 END TEST accel_compare 00:06:47.023 ************************************ 00:06:47.023 14:15:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.023 14:15:24 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:47.023 14:15:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.023 14:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.023 ************************************ 00:06:47.023 START TEST accel_xor 00:06:47.023 ************************************ 00:06:47.023 14:15:24 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:47.023 [2024-06-10 14:15:24.289205] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
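One detail that repeats across these blocks: build_accel_config finds nothing to add (accel_json_cfg=() stays empty and every [[ 0 -gt 0 ]] / [[ -n '' ]] guard falls through), so no hardware engine is configured and each opcode lands on the software module. That is what the closing checks of every test assert; in simplified form, not the literal accel.sh code:

  [[ -n "$accel_module" ]]            # some module handled the op
  [[ -n "$accel_opc" ]]               # the expected opcode was parsed back from the run
  [[ "$accel_module" == software ]]   # and it ran on the software path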
00:06:47.023 [2024-06-10 14:15:24.289267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831505 ] 00:06:47.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.023 [2024-06-10 14:15:24.367996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.023 [2024-06-10 14:15:24.438277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.023 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.024 14:15:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 
14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.408 00:06:48.408 real 0m1.301s 00:06:48.408 user 0m1.184s 00:06:48.408 sys 0m0.119s 00:06:48.408 14:15:25 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.408 14:15:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:48.408 ************************************ 00:06:48.408 END TEST accel_xor 00:06:48.408 ************************************ 00:06:48.408 14:15:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.408 14:15:25 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:48.408 14:15:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.408 14:15:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.408 ************************************ 00:06:48.408 START TEST accel_xor 00:06:48.408 ************************************ 00:06:48.408 14:15:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:48.408 [2024-06-10 14:15:25.654759] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
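The accel_xor test that starts here re-runs the xor workload with -x 3, and the configuration read out just below shows val=3 where the first xor run showed val=2 (the number of source buffers). Side by side, as printed in this log:

  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y        # default: two source buffers
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3   # -x 3: XOR three source buffers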
00:06:48.408 [2024-06-10 14:15:25.654819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831709 ] 00:06:48.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.408 [2024-06-10 14:15:25.730924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.408 [2024-06-10 14:15:25.799226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.408 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 14:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 
14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.400 14:15:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.400 00:06:49.400 real 0m1.298s 00:06:49.400 user 0m1.187s 00:06:49.400 sys 0m0.112s 00:06:49.400 14:15:26 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.400 14:15:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 END TEST accel_xor 00:06:49.400 ************************************ 00:06:49.400 14:15:26 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:49.400 14:15:26 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:49.400 14:15:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.400 14:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 START TEST accel_dif_verify 00:06:49.400 ************************************ 00:06:49.401 14:15:26 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:49.401 14:15:26 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:49.662 [2024-06-10 14:15:27.015674] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
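accel_dif_verify starts here. Its configuration, read out just below, carries a set of byte sizes after the opcode (4096 twice, then 512 and 8). Reading those as the data buffer sizes, the block size and the per-block DIF metadata size is my assumption, not something this log states; the invocation itself is exactly what is printed above:

  # run the dif_verify workload for one second, as launched by this job
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify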
00:06:49.662 [2024-06-10 14:15:27.015764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832040 ] 00:06:49.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.662 [2024-06-10 14:15:27.093291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.662 [2024-06-10 14:15:27.160713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 
14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.662 14:15:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 
14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:51.046 14:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.046 00:06:51.046 real 0m1.299s 00:06:51.046 user 0m1.189s 00:06:51.046 sys 0m0.111s 00:06:51.046 14:15:28 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.046 14:15:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.046 ************************************ 00:06:51.046 END TEST accel_dif_verify 00:06:51.046 ************************************ 00:06:51.046 14:15:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.046 14:15:28 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:51.046 14:15:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.046 14:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.046 ************************************ 00:06:51.046 START TEST accel_dif_generate 00:06:51.046 ************************************ 00:06:51.046 14:15:28 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 
14:15:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:51.046 [2024-06-10 14:15:28.374113] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:51.046 [2024-06-10 14:15:28.374174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832397 ] 00:06:51.046 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.046 [2024-06-10 14:15:28.453381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.046 [2024-06-10 14:15:28.523017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.046 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
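A small pattern shows up in these configuration read-outs: the Yes/No value near the end of each block tracks the -y verify flag. The dualcast, compare and xor runs are started with -y and read back val=Yes, while the DIF runs, this dif_generate included (see val=No just below), omit it. The two invocation shapes from this log:

  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y      # verify on, config reads Yes
  $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate    # no -y, config reads No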
00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.047 14:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:52.433 14:15:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.433 00:06:52.433 real 0m1.301s 00:06:52.433 user 0m1.195s 00:06:52.433 sys 
0m0.107s 00:06:52.433 14:15:29 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.433 14:15:29 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:52.433 ************************************ 00:06:52.433 END TEST accel_dif_generate 00:06:52.433 ************************************ 00:06:52.433 14:15:29 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:52.433 14:15:29 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:52.433 14:15:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.433 14:15:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.433 ************************************ 00:06:52.433 START TEST accel_dif_generate_copy 00:06:52.433 ************************************ 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:52.433 [2024-06-10 14:15:29.740159] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:52.433 [2024-06-10 14:15:29.740219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832744 ] 00:06:52.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.433 [2024-06-10 14:15:29.818533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.433 [2024-06-10 14:15:29.889888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.433 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.433 14:15:29 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.434 14:15:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.817 00:06:53.817 real 0m1.302s 00:06:53.817 user 0m1.195s 00:06:53.817 sys 0m0.107s 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.817 14:15:31 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.817 ************************************ 00:06:53.817 END TEST accel_dif_generate_copy 00:06:53.817 ************************************ 00:06:53.817 14:15:31 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:53.817 14:15:31 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.817 14:15:31 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:53.817 14:15:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.817 14:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.817 ************************************ 00:06:53.817 START TEST accel_comp 00:06:53.817 ************************************ 00:06:53.817 14:15:31 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:53.817 [2024-06-10 14:15:31.109047] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:53.817 [2024-06-10 14:15:31.109109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833014 ] 00:06:53.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.817 [2024-06-10 14:15:31.187404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.817 [2024-06-10 14:15:31.257683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 
14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.817 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.818 14:15:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.818 14:15:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:55.201 14:15:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.201 00:06:55.201 real 0m1.303s 00:06:55.201 user 0m1.194s 00:06:55.201 sys 0m0.110s 00:06:55.201 14:15:32 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:55.201 14:15:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:55.201 ************************************ 00:06:55.201 END TEST accel_comp 00:06:55.201 ************************************ 00:06:55.201 14:15:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.201 14:15:32 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:55.201 14:15:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:55.201 14:15:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.202 ************************************ 00:06:55.202 START TEST accel_decomp 00:06:55.202 ************************************ 00:06:55.202 14:15:32 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:55.202 [2024-06-10 14:15:32.481476] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:55.202 [2024-06-10 14:15:32.481574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833216 ] 00:06:55.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.202 [2024-06-10 14:15:32.559541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.202 [2024-06-10 14:15:32.631161] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.202 14:15:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.582 14:15:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.582 00:06:56.582 real 0m1.305s 00:06:56.582 user 0m1.190s 00:06:56.582 sys 0m0.116s 00:06:56.582 14:15:33 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.582 14:15:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 ************************************ 00:06:56.582 END TEST accel_decomp 00:06:56.582 ************************************ 00:06:56.582 
14:15:33 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.582 14:15:33 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:56.582 14:15:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.582 14:15:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.582 ************************************ 00:06:56.582 START TEST accel_decomp_full 00:06:56.582 ************************************ 00:06:56.582 14:15:33 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:56.582 14:15:33 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:56.582 [2024-06-10 14:15:33.854034] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:06:56.582 [2024-06-10 14:15:33.854096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833483 ] 00:06:56.582 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.582 [2024-06-10 14:15:33.934118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.582 [2024-06-10 14:15:34.007994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.582 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.583 14:15:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.967 14:15:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.967 00:06:57.967 real 0m1.318s 00:06:57.967 user 0m1.203s 00:06:57.967 sys 0m0.114s 00:06:57.967 14:15:35 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.967 14:15:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:57.967 ************************************ 00:06:57.967 END TEST accel_decomp_full 00:06:57.967 ************************************ 00:06:57.967 14:15:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.967 14:15:35 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:57.967 14:15:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.967 14:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.967 ************************************ 00:06:57.967 START TEST accel_decomp_mcore 00:06:57.967 ************************************ 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.967 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:57.968 [2024-06-10 14:15:35.236665] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:57.968 [2024-06-10 14:15:35.236735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833832 ] 00:06:57.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.968 [2024-06-10 14:15:35.317278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.968 [2024-06-10 14:15:35.396335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.968 [2024-06-10 14:15:35.396420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.968 [2024-06-10 14:15:35.396578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.968 [2024-06-10 14:15:35.396578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.968 14:15:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.356 00:06:59.356 real 0m1.326s 00:06:59.356 user 0m4.448s 00:06:59.356 sys 0m0.122s 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.356 14:15:36 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:59.356 ************************************ 00:06:59.356 END TEST accel_decomp_mcore 00:06:59.356 ************************************ 00:06:59.356 14:15:36 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.356 14:15:36 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:59.356 14:15:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.356 14:15:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.356 ************************************ 00:06:59.356 START TEST accel_decomp_full_mcore 00:06:59.356 ************************************ 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:59.356 [2024-06-10 14:15:36.637875] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:06:59.356 [2024-06-10 14:15:36.637960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834191 ] 00:06:59.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.356 [2024-06-10 14:15:36.718601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.356 [2024-06-10 14:15:36.794096] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.356 [2024-06-10 14:15:36.794233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.356 [2024-06-10 14:15:36.794392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.356 [2024-06-10 14:15:36.794392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.356 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.357 14:15:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.744 00:07:00.744 real 0m1.335s 00:07:00.744 user 0m4.489s 00:07:00.744 sys 0m0.123s 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.744 14:15:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:00.744 ************************************ 00:07:00.744 END TEST accel_decomp_full_mcore 00:07:00.744 ************************************ 00:07:00.744 14:15:37 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.744 14:15:37 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:00.744 14:15:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.744 14:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.744 ************************************ 00:07:00.744 START TEST accel_decomp_mthread 00:07:00.744 ************************************ 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
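Both multicore decompress cases wrap the same accel_perf invocation: run_test calls accel_test, which launches build/examples/accel_perf with a decompress workload over the pre-compressed test/accel/bib file, verification (-y) enabled and a 0xf core mask, matching the 'Total cores available: 4' line and the four reactor start-ups in its start-up output. The 'full' variant adds -o 0, and its trace reports '111250 bytes' instead of '4096 bytes', so it appears to decompress the whole payload in one shot rather than in 4 KiB chunks. The command reconstructed from the trace (the -c /dev/fd/62 JSON config is supplied by the accel_test wrapper, and the flag readings are inferred from the surrounding records, not quoted from accel_perf's help output):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # accel_decomp_full_mcore as traced above: full-payload decompress, verify, cores 0-3
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf
    # The plain accel_decomp_mcore case appears to be the same invocation without -o 0,
    # judging by its 4096-byte values and multi-core user time.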
00:07:00.744 [2024-06-10 14:15:38.044817] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:07:00.744 [2024-06-10 14:15:38.044883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834543 ] 00:07:00.744 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.744 [2024-06-10 14:15:38.124406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.744 [2024-06-10 14:15:38.201260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.744 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.745 14:15:38 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.131 00:07:02.131 real 0m1.320s 00:07:02.131 user 0m1.215s 00:07:02.131 sys 0m0.117s 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.131 14:15:39 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:02.131 ************************************ 00:07:02.131 END TEST accel_decomp_mthread 00:07:02.131 ************************************ 00:07:02.131 14:15:39 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.131 14:15:39 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:02.131 14:15:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.131 14:15:39 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.131 ************************************ 00:07:02.131 START TEST accel_decomp_full_mthread 00:07:02.131 ************************************ 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:02.131 [2024-06-10 14:15:39.439179] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
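The two *_mthread cases repeat the decompress run on a single core (core mask 0x1, 'Total cores available: 1', one reactor on core 0) but with -T 2, and the val=2 record in their configuration loop indicates two worker threads sharing that core; the 'full' flavour again combines this with -o 0 and the 111250-byte payload. As traced, the two wrappers are invoked like this (run_test and accel_test are harness functions from autotest_common.sh and accel.sh, so these lines only make sense inside the autotest environment):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Two threads, 4 KiB blocks:
    run_test accel_decomp_mthread accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
    # Two threads, full payload:
    run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2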
00:07:02.131 [2024-06-10 14:15:39.439242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834787 ] 00:07:02.131 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.131 [2024-06-10 14:15:39.518854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.131 [2024-06-10 14:15:39.593916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.131 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.132 14:15:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.518 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.519 00:07:03.519 real 0m1.343s 00:07:03.519 user 0m1.242s 00:07:03.519 sys 0m0.114s 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.519 14:15:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:03.519 ************************************ 00:07:03.519 END TEST accel_decomp_full_mthread 00:07:03.519 
************************************ 00:07:03.519 14:15:40 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:03.519 14:15:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.519 14:15:40 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:03.519 14:15:40 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:03.519 14:15:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.519 14:15:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.519 14:15:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.519 14:15:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.519 14:15:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.519 14:15:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.519 14:15:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.519 14:15:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:03.519 14:15:40 accel -- accel/accel.sh@41 -- # jq -r . 00:07:03.519 ************************************ 00:07:03.519 START TEST accel_dif_functional_tests 00:07:03.519 ************************************ 00:07:03.519 14:15:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:03.519 [2024-06-10 14:15:40.878227] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:07:03.519 [2024-06-10 14:15:40.878276] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835002 ] 00:07:03.519 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.519 [2024-06-10 14:15:40.956202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.519 [2024-06-10 14:15:41.038121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.519 [2024-06-10 14:15:41.038256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.519 [2024-06-10 14:15:41.038260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.519 00:07:03.519 00:07:03.519 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.519 http://cunit.sourceforge.net/ 00:07:03.519 00:07:03.519 00:07:03.519 Suite: accel_dif 00:07:03.519 Test: verify: DIF generated, GUARD check ...passed 00:07:03.519 Test: verify: DIF generated, APPTAG check ...passed 00:07:03.519 Test: verify: DIF generated, REFTAG check ...passed 00:07:03.519 Test: verify: DIF not generated, GUARD check ...[2024-06-10 14:15:41.094837] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.519 passed 00:07:03.519 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 14:15:41.094881] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.519 passed 00:07:03.519 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 14:15:41.094903] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.519 passed 00:07:03.519 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:03.519 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 14:15:41.094952] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:03.519 passed 00:07:03.519 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:07:03.519 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:03.519 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:03.519 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 14:15:41.095067] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:03.519 passed 00:07:03.519 Test: verify copy: DIF generated, GUARD check ...passed 00:07:03.519 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:03.519 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:03.519 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 14:15:41.095189] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:03.519 passed 00:07:03.519 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 14:15:41.095212] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:03.519 passed 00:07:03.519 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 14:15:41.095233] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:03.519 passed 00:07:03.519 Test: generate copy: DIF generated, GUARD check ...passed 00:07:03.519 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:03.519 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:03.519 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:03.519 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:03.519 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:03.519 Test: generate copy: iovecs-len validate ...[2024-06-10 14:15:41.095435] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:03.519 passed 00:07:03.519 Test: generate copy: buffer alignment validate ...passed 00:07:03.519 00:07:03.519 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.519 suites 1 1 n/a 0 0 00:07:03.519 tests 26 26 26 0 0 00:07:03.519 asserts 115 115 115 0 n/a 00:07:03.519 00:07:03.519 Elapsed time = 0.002 seconds 00:07:03.780 00:07:03.780 real 0m0.380s 00:07:03.780 user 0m0.489s 00:07:03.780 sys 0m0.155s 00:07:03.780 14:15:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.780 14:15:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:03.780 ************************************ 00:07:03.780 END TEST accel_dif_functional_tests 00:07:03.780 ************************************ 00:07:03.780 00:07:03.780 real 0m30.226s 00:07:03.780 user 0m33.456s 00:07:03.780 sys 0m4.253s 00:07:03.780 14:15:41 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.780 14:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.780 ************************************ 00:07:03.780 END TEST accel 00:07:03.780 ************************************ 00:07:03.780 14:15:41 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:03.780 14:15:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:03.780 14:15:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.780 14:15:41 -- common/autotest_common.sh@10 -- # set +x 00:07:03.780 ************************************ 00:07:03.780 START TEST accel_rpc 00:07:03.780 ************************************ 00:07:03.780 14:15:41 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:04.042 * Looking for test storage... 00:07:04.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:04.042 14:15:41 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.042 14:15:41 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2835314 00:07:04.042 14:15:41 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2835314 00:07:04.042 14:15:41 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 2835314 ']' 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:04.042 14:15:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.042 [2024-06-10 14:15:41.475642] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
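The accel_dif_functional_tests binary behind the CUnit block above deliberately corrupts protection information and expects the software DIF code to flag exactly what was injected: the dif.c *ERROR* lines for Guard, App Tag and Ref Tag comparisons belong to the 'DIF not generated' verify and verify-copy cases, the bounce_iovs message to the iovecs-len validation case, and all 26 tests pass with 115 assertions in about 2 ms. The accel_rpc suite that starts here launches a bare spdk_tgt with --wait-for-rpc and exercises opcode-to-module assignment over JSON-RPC before framework initialization; the calls it drives, pieced together from the rpc_cmd records that follow (shown against scripts/rpc.py, which is what rpc_cmd wraps, and assuming the target from this test is up):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Assign the copy opcode to a module that does not exist, then to the software module;
    # the target logs the two 'Operation copy will be assigned to module ...' notices.
    "$RPC" accel_assign_opc -o copy -m incorrect
    "$RPC" accel_assign_opc -o copy -m software
    # Finish start-up, then confirm the assignment stuck.
    "$RPC" framework_start_init
    "$RPC" accel_get_opc_assignments | jq -r .copy | grep software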
00:07:04.042 [2024-06-10 14:15:41.475712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835314 ] 00:07:04.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.042 [2024-06-10 14:15:41.556002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.042 [2024-06-10 14:15:41.627628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.984 14:15:42 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.984 14:15:42 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:04.984 14:15:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:04.984 14:15:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:04.984 14:15:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:04.984 14:15:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:04.984 14:15:42 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:04.984 14:15:42 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:04.984 14:15:42 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.984 14:15:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 ************************************ 00:07:04.984 START TEST accel_assign_opcode 00:07:04.984 ************************************ 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 [2024-06-10 14:15:42.369752] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 [2024-06-10 14:15:42.381775] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.984 14:15:42 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.984 software 00:07:04.984 00:07:04.984 real 0m0.213s 00:07:04.984 user 0m0.048s 00:07:04.984 sys 0m0.011s 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:04.984 14:15:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:04.984 ************************************ 00:07:04.984 END TEST accel_assign_opcode 00:07:04.984 ************************************ 00:07:05.245 14:15:42 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2835314 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 2835314 ']' 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 2835314 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2835314 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2835314' 00:07:05.245 killing process with pid 2835314 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@968 -- # kill 2835314 00:07:05.245 14:15:42 accel_rpc -- common/autotest_common.sh@973 -- # wait 2835314 00:07:05.506 00:07:05.506 real 0m1.557s 00:07:05.506 user 0m1.700s 00:07:05.506 sys 0m0.432s 00:07:05.506 14:15:42 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.506 14:15:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.506 ************************************ 00:07:05.506 END TEST accel_rpc 00:07:05.506 ************************************ 00:07:05.506 14:15:42 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.506 14:15:42 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:05.506 14:15:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.506 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.506 ************************************ 00:07:05.506 START TEST app_cmdline 00:07:05.506 ************************************ 00:07:05.506 14:15:42 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:05.506 * Looking for test storage... 
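app_cmdline, which begins here, restarts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and verifies three things in the records below: spdk_get_version returns the v24.09-pre / 28a75b1f3 version JSON, rpc_get_methods reports exactly those two allowed methods, and any other call such as env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ('Method not found'). A condensed sketch of the same checks against a target started with that allow-list (rpc.py calls as in the trace; the error-handling shape is illustrative rather than copied from cmdline.sh):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # Allowed: prints the version string from the JSON shown below.
    "$RPC" spdk_get_version | jq -r '.version'
    # Allowed: should list only rpc_get_methods and spdk_get_version.
    "$RPC" rpc_get_methods | jq -r '.[]' | sort
    # Anything else must fail with code -32601 "Method not found".
    if "$RPC" env_dpdk_get_mem_stats 2>/dev/null; then
        echo "env_dpdk_get_mem_stats was unexpectedly allowed" >&2
        exit 1
    fi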
00:07:05.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.506 14:15:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:05.506 14:15:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2835723 00:07:05.506 14:15:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2835723 00:07:05.506 14:15:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 2835723 ']' 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:05.506 14:15:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.767 [2024-06-10 14:15:43.101952] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:07:05.767 [2024-06-10 14:15:43.102025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835723 ] 00:07:05.767 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.767 [2024-06-10 14:15:43.181341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.767 [2024-06-10 14:15:43.252922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.711 14:15:43 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:06.711 14:15:43 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:07:06.711 14:15:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:06.711 { 00:07:06.711 "version": "SPDK v24.09-pre git sha1 28a75b1f3", 00:07:06.711 "fields": { 00:07:06.711 "major": 24, 00:07:06.711 "minor": 9, 00:07:06.711 "patch": 0, 00:07:06.711 "suffix": "-pre", 00:07:06.711 "commit": "28a75b1f3" 00:07:06.711 } 00:07:06.711 } 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:06.711 14:15:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:06.711 14:15:44 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.972 request: 00:07:06.972 { 00:07:06.972 "method": "env_dpdk_get_mem_stats", 00:07:06.972 "req_id": 1 00:07:06.972 } 00:07:06.972 Got JSON-RPC error response 00:07:06.972 response: 00:07:06.972 { 00:07:06.972 "code": -32601, 00:07:06.972 "message": "Method not found" 00:07:06.972 } 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:06.972 14:15:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2835723 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 2835723 ']' 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 2835723 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2835723 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2835723' 00:07:06.972 killing process with pid 2835723 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@968 -- # kill 2835723 00:07:06.972 14:15:44 app_cmdline -- common/autotest_common.sh@973 -- # wait 2835723 00:07:07.233 00:07:07.233 real 0m1.734s 00:07:07.233 user 0m2.186s 00:07:07.233 sys 0m0.439s 00:07:07.233 14:15:44 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.233 14:15:44 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.233 ************************************ 00:07:07.233 END TEST app_cmdline 00:07:07.233 ************************************ 00:07:07.233 14:15:44 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.233 14:15:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:07.233 14:15:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.233 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.233 ************************************ 00:07:07.233 START TEST version 00:07:07.233 ************************************ 00:07:07.233 14:15:44 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:07.494 * Looking for test storage... 00:07:07.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:07.494 14:15:44 version -- app/version.sh@17 -- # get_header_version major 00:07:07.494 14:15:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # cut -f2 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.494 14:15:44 version -- app/version.sh@17 -- # major=24 00:07:07.494 14:15:44 version -- app/version.sh@18 -- # get_header_version minor 00:07:07.494 14:15:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # cut -f2 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.494 14:15:44 version -- app/version.sh@18 -- # minor=9 00:07:07.494 14:15:44 version -- app/version.sh@19 -- # get_header_version patch 00:07:07.494 14:15:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # cut -f2 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.494 14:15:44 version -- app/version.sh@19 -- # patch=0 00:07:07.494 14:15:44 version -- app/version.sh@20 -- # get_header_version suffix 00:07:07.494 14:15:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # cut -f2 00:07:07.494 14:15:44 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.494 14:15:44 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.494 14:15:44 version -- app/version.sh@22 -- # version=24.9 00:07:07.494 14:15:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.494 14:15:44 version -- app/version.sh@28 -- # version=24.9rc0 00:07:07.494 14:15:44 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.494 14:15:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.494 14:15:44 version -- app/version.sh@30 -- # py_version=24.9rc0 
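The version test above reconstructs 24.9 purely by scraping include/spdk/version.h: get_header_version greps the requested SPDK_VERSION_* macro and strips everything but its value, and the -pre suffix is then rewritten as rc0 so the result matches python's spdk.__version__ (24.9rc0 in the trace). A small stand-alone sketch of that parsing, with the header path and pipeline taken from the trace and the tab-separated cut assumed:

    #!/usr/bin/env bash
    # Sketch: pull the SPDK version fields out of version.h the way app/version.sh does.
    hdr=include/spdk/version.h                      # relative to an SPDK checkout
    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR 24' -> '24' (assumes the tab before the value)
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)
    minor=$(get_header_version MINOR)
    patch=$(get_header_version PATCH)
    suffix=$(get_header_version SUFFIX)
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    echo "version.h says ${version}${suffix}"       # 24.9-pre for this checkout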
00:07:07.494 14:15:44 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:07.494 00:07:07.494 real 0m0.177s 00:07:07.494 user 0m0.082s 00:07:07.494 sys 0m0.135s 00:07:07.494 14:15:44 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.494 14:15:44 version -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 ************************************ 00:07:07.494 END TEST version 00:07:07.494 ************************************ 00:07:07.494 14:15:44 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:07.494 14:15:44 -- spdk/autotest.sh@198 -- # uname -s 00:07:07.494 14:15:44 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:07.494 14:15:44 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:07.494 14:15:44 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:07.494 14:15:44 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:07.494 14:15:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:07.494 14:15:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:07.494 14:15:44 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:07.494 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 14:15:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:07.494 14:15:45 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:07.494 14:15:45 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:07.494 14:15:45 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:07.494 14:15:45 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:07.494 14:15:45 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:07.494 14:15:45 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.494 14:15:45 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:07.494 14:15:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.494 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 ************************************ 00:07:07.494 START TEST nvmf_tcp 00:07:07.494 ************************************ 00:07:07.494 14:15:45 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.756 * Looking for test storage... 00:07:07.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:07.756 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.756 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.756 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.756 14:15:45 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.757 14:15:45 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.757 14:15:45 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.757 14:15:45 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.757 14:15:45 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:07.757 14:15:45 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:07.757 14:15:45 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:07.757 14:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:07.757 14:15:45 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:07.757 14:15:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:07.757 14:15:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.757 14:15:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 ************************************ 00:07:07.757 START TEST nvmf_example 00:07:07.757 ************************************ 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:07.757 * Looking for test storage... 
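Before any target test runs, nvmf/common.sh (sourced above) pins the listener ports and builds a host identity with nvme-cli; that identity is what later nvme connect calls pass as --hostnqn/--hostid. A condensed sketch of those conventions as they appear in the trace; the parameter expansion used to derive the host ID from the NQN is an assumption:

    #!/usr/bin/env bash
    # Sketch: the host-side identity nvmf/common.sh prepares for kernel-initiator tests.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVME_HOSTNQN=$(nvme gen-hostnqn)                # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                 # assumed: keep only the uuid portion, as seen in the trace
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "this host will identify as $NVME_HOSTNQN"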
00:07:07.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:07.757 14:15:45 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:07:07.758 14:15:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.019 14:15:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.652 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.652 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:14.653 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:14.653 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:14.653 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:14.653 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:14.653 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:07:14.914 00:07:14.914 --- 10.0.0.2 ping statistics --- 00:07:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.914 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:14.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:07:14.914 00:07:14.914 --- 10.0.0.1 ping statistics --- 00:07:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.914 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2839826 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2839826 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 2839826 ']' 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
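The block above wires the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-tested before the target app starts. A minimal sketch of the same wiring, with commands and addresses taken from the trace (run as root):

    #!/usr/bin/env bash
    # Sketch: reproduce the namespace-per-target topology set up by nvmf_tcp_init.
    set -e
    tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"                        # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"                    # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1                   # target -> initiator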
00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:14.914 14:15:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:15.856 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.116 14:15:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.116 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:16.116 14:15:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:16.116 EAL: No free 2048 kB hugepages reported on node 1 
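The rpc_cmd calls above are the entire target bring-up for this example: a TCP transport, one 64 MiB malloc bdev, a subsystem cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420; spdk_nvme_perf is then pointed at that listener. A condensed sketch of the same sequence against an already-running target, using rpc.py directly instead of the test's rpc_cmd wrapper (rpc.py path and default RPC socket assumed):

    #!/usr/bin/env bash
    # Sketch: configure the NVMe-oF/TCP target the way nvmf_example.sh does over RPC.
    set -e
    rpc="scripts/rpc.py"                                     # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options as in the trace
    $rpc bdev_malloc_create 64 512                           # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420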
00:07:28.342 Initializing NVMe Controllers 00:07:28.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.342 Initialization complete. Launching workers. 00:07:28.342 ======================================================== 00:07:28.342 Latency(us) 00:07:28.342 Device Information : IOPS MiB/s Average min max 00:07:28.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16758.80 65.46 3818.52 879.42 15702.07 00:07:28.342 ======================================================== 00:07:28.342 Total : 16758.80 65.46 3818.52 879.42 15702.07 00:07:28.342 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.342 rmmod nvme_tcp 00:07:28.342 rmmod nvme_fabrics 00:07:28.342 rmmod nvme_keyring 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2839826 ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2839826 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 2839826 ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 2839826 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2839826 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2839826' 00:07:28.342 killing process with pid 2839826 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 2839826 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 2839826 00:07:28.342 nvmf threads initialize successfully 00:07:28.342 bdev subsystem init successfully 00:07:28.342 created a nvmf target service 00:07:28.342 create targets's poll groups done 00:07:28.342 all subsystems of target started 00:07:28.342 nvmf target is running 00:07:28.342 all subsystems of target stopped 00:07:28.342 destroy targets's poll groups done 00:07:28.342 destroyed the nvmf target service 00:07:28.342 bdev subsystem finish successfully 00:07:28.342 nvmf threads destroy successfully 00:07:28.342 14:16:03 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.342 14:16:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.603 00:07:28.603 real 0m20.892s 00:07:28.603 user 0m46.958s 00:07:28.603 sys 0m6.355s 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:28.603 14:16:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.603 ************************************ 00:07:28.603 END TEST nvmf_example 00:07:28.603 ************************************ 00:07:28.603 14:16:06 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:28.603 14:16:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:28.603 14:16:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.603 14:16:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.603 ************************************ 00:07:28.603 START TEST nvmf_filesystem 00:07:28.603 ************************************ 00:07:28.603 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:28.866 * Looking for test storage... 
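The example run above drives the subsystem only from user space with spdk_nvme_perf, but the nvme-tcp module loaded earlier and the host identity prepared in nvmf/common.sh are what a kernel-initiator test would use instead. A hedged sketch of that host-side path, which this particular run did not execute:

    #!/usr/bin/env bash
    # Sketch: attach the kernel NVMe/TCP initiator to the subsystem the target exposed.
    set -e
    modprobe nvme-tcp
    nvme connect --transport=tcp --traddr=10.0.0.2 --trsvcid=4420 \
         --nqn=nqn.2016-06.io.spdk:cnode1 \
         --hostnqn="$(nvme gen-hostnqn)"                     # the tests reuse the NQN from common.sh instead
    nvme list                                                # the malloc namespace shows up as /dev/nvmeXn1
    nvme disconnect --nqn=nqn.2016-06.io.spdk:cnode1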
00:07:28.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:28.866 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:28.867 14:16:06 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:28.867 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:28.867 #define SPDK_CONFIG_H 00:07:28.867 #define SPDK_CONFIG_APPS 1 00:07:28.867 #define SPDK_CONFIG_ARCH native 00:07:28.867 #undef SPDK_CONFIG_ASAN 00:07:28.867 #undef SPDK_CONFIG_AVAHI 00:07:28.867 #undef SPDK_CONFIG_CET 00:07:28.867 #define SPDK_CONFIG_COVERAGE 1 00:07:28.867 #define SPDK_CONFIG_CROSS_PREFIX 00:07:28.867 #undef SPDK_CONFIG_CRYPTO 00:07:28.867 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:28.867 #undef SPDK_CONFIG_CUSTOMOCF 00:07:28.867 #undef SPDK_CONFIG_DAOS 00:07:28.867 #define SPDK_CONFIG_DAOS_DIR 00:07:28.867 #define SPDK_CONFIG_DEBUG 1 00:07:28.867 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:28.867 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:28.867 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:28.867 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:28.867 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:28.867 #undef SPDK_CONFIG_DPDK_UADK 00:07:28.867 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:28.867 #define SPDK_CONFIG_EXAMPLES 1 00:07:28.867 #undef SPDK_CONFIG_FC 00:07:28.867 #define SPDK_CONFIG_FC_PATH 00:07:28.867 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:28.867 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:28.867 #undef SPDK_CONFIG_FUSE 00:07:28.867 #undef SPDK_CONFIG_FUZZER 00:07:28.867 #define SPDK_CONFIG_FUZZER_LIB 00:07:28.867 #undef SPDK_CONFIG_GOLANG 00:07:28.867 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:28.867 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:28.867 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:28.867 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:28.867 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:28.867 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:28.867 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:28.867 #define SPDK_CONFIG_IDXD 1 00:07:28.867 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:28.867 #undef SPDK_CONFIG_IPSEC_MB 00:07:28.867 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:28.867 #define SPDK_CONFIG_ISAL 1 00:07:28.867 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:28.867 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:28.867 #define SPDK_CONFIG_LIBDIR 00:07:28.867 #undef SPDK_CONFIG_LTO 00:07:28.867 #define SPDK_CONFIG_MAX_LCORES 00:07:28.867 #define SPDK_CONFIG_NVME_CUSE 1 00:07:28.867 #undef SPDK_CONFIG_OCF 00:07:28.867 #define SPDK_CONFIG_OCF_PATH 00:07:28.867 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:28.867 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:28.867 #define SPDK_CONFIG_PGO_DIR 00:07:28.867 #undef SPDK_CONFIG_PGO_USE 00:07:28.867 #define SPDK_CONFIG_PREFIX /usr/local 00:07:28.867 #undef SPDK_CONFIG_RAID5F 00:07:28.867 #undef SPDK_CONFIG_RBD 00:07:28.867 #define SPDK_CONFIG_RDMA 1 00:07:28.867 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:28.867 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:28.867 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:28.867 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:28.867 #define SPDK_CONFIG_SHARED 1 00:07:28.868 #undef SPDK_CONFIG_SMA 00:07:28.868 #define SPDK_CONFIG_TESTS 1 00:07:28.868 #undef SPDK_CONFIG_TSAN 00:07:28.868 #define SPDK_CONFIG_UBLK 1 00:07:28.868 #define SPDK_CONFIG_UBSAN 1 00:07:28.868 #undef SPDK_CONFIG_UNIT_TESTS 00:07:28.868 #undef SPDK_CONFIG_URING 00:07:28.868 #define SPDK_CONFIG_URING_PATH 00:07:28.868 #undef SPDK_CONFIG_URING_ZNS 00:07:28.868 #undef SPDK_CONFIG_USDT 00:07:28.868 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:28.868 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:28.868 #define SPDK_CONFIG_VFIO_USER 1 00:07:28.868 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:28.868 #define SPDK_CONFIG_VHOST 1 00:07:28.868 #define SPDK_CONFIG_VIRTIO 1 00:07:28.868 #undef SPDK_CONFIG_VTUNE 00:07:28.868 #define SPDK_CONFIG_VTUNE_DIR 00:07:28.868 #define SPDK_CONFIG_WERROR 1 00:07:28.868 #define SPDK_CONFIG_WPDK_DIR 00:07:28.868 #undef SPDK_CONFIG_XNVME 00:07:28.868 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:28.868 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:28.869 14:16:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:28.869 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
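A few entries below, autotest_common.sh runs set_test_storage: the harness requests roughly 2 GiB of scratch space, builds a list of candidate directories, reads a df table, and exports SPDK_TEST_STORAGE to the first candidate whose filesystem has enough room. A minimal bash sketch of that selection step, assuming plain df -P output and using illustrative names (want_bytes, avail_kb) rather than the script's own variables:

want_bytes=2147483648                      # ~2 GiB requested by the caller
fallback=$(mktemp -udt spdk.XXXXXX)        # unused temp path, e.g. /tmp/spdk.XXXXXX
for dir in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
  mkdir -p "$dir"
  # df -P prints one header line, then: Filesystem 1K-blocks Used Available Use% Mount
  avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
  if (( avail_kb * 1024 >= want_bytes )); then
    export SPDK_TEST_STORAGE=$dir
    break
  fi
done

In this run the root overlay filesystem has far more than the requested size available, so the first candidate (the nvmf target test directory) is kept, as the "Found test storage at ..." message further down confirms.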
00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2842737 ]] 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2842737 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.hM4PT0 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hM4PT0/tests/target /tmp/spdk.hM4PT0 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956665856 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327763968 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=123764244480 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370968064 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5606723584 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64682106880 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685481984 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864499200 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874194432 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9695232 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:28.870 14:16:06 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64685084672 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685486080 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=401408 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937089024 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937093120 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:28.870 * Looking for test storage... 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:28.870 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=123764244480 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7821316096 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.871 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.132 14:16:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:35.717 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:35.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:35.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:35.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:35.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.718 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:07:35.979 00:07:35.979 --- 10.0.0.2 ping statistics --- 00:07:35.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.979 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:35.979 00:07:35.979 --- 10.0.0.1 ping statistics --- 00:07:35.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.979 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.979 14:16:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.240 ************************************ 00:07:36.240 START TEST nvmf_filesystem_no_in_capsule 00:07:36.240 ************************************ 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2846567 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2846567 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2846567 ']' 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:36.240 14:16:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.240 [2024-06-10 14:16:13.667107] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:07:36.240 [2024-06-10 14:16:13.667169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.240 [2024-06-10 14:16:13.754267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.500 [2024-06-10 14:16:13.853170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.500 [2024-06-10 14:16:13.853230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.500 [2024-06-10 14:16:13.853238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.500 [2024-06-10 14:16:13.853245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.500 [2024-06-10 14:16:13.853251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.500 [2024-06-10 14:16:13.853382] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.500 [2024-06-10 14:16:13.853528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.500 [2024-06-10 14:16:13.853808] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.500 [2024-06-10 14:16:13.853908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.071 [2024-06-10 14:16:14.598300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.071 14:16:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.071 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 Malloc1 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 [2024-06-10 14:16:14.730588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
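The rpc_cmd calls traced above effectively forward to SPDK's scripts/rpc.py, which talks to the target over its default /var/tmp/spdk.sock RPC socket. A minimal sketch of the equivalent direct invocations for this provisioning sequence (the rpc.py path and socket default are assumptions based on what is visible earlier in this log, not part of the trace itself):

  # create the TCP transport: 8 KiB I/O unit size, in-capsule data disabled (-c 0)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # 512 MiB malloc (RAM-backed) bdev with 512-byte blocks, to be exported as the namespace
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  # subsystem allowing any host (-a), with the serial the initiator later greps for
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # listen on the address assigned to cvl_0_0 in the target namespace, default NVMe/TCP port
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420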
00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:37.333 { 00:07:37.333 "name": "Malloc1", 00:07:37.333 "aliases": [ 00:07:37.333 "32d28248-9bd0-484a-8ef5-53615c41a2ef" 00:07:37.333 ], 00:07:37.333 "product_name": "Malloc disk", 00:07:37.333 "block_size": 512, 00:07:37.333 "num_blocks": 1048576, 00:07:37.333 "uuid": "32d28248-9bd0-484a-8ef5-53615c41a2ef", 00:07:37.333 "assigned_rate_limits": { 00:07:37.333 "rw_ios_per_sec": 0, 00:07:37.333 "rw_mbytes_per_sec": 0, 00:07:37.333 "r_mbytes_per_sec": 0, 00:07:37.333 "w_mbytes_per_sec": 0 00:07:37.333 }, 00:07:37.333 "claimed": true, 00:07:37.333 "claim_type": "exclusive_write", 00:07:37.333 "zoned": false, 00:07:37.333 "supported_io_types": { 00:07:37.333 "read": true, 00:07:37.333 "write": true, 00:07:37.333 "unmap": true, 00:07:37.333 "write_zeroes": true, 00:07:37.333 "flush": true, 00:07:37.333 "reset": true, 00:07:37.333 "compare": false, 00:07:37.333 "compare_and_write": false, 00:07:37.333 "abort": true, 00:07:37.333 "nvme_admin": false, 00:07:37.333 "nvme_io": false 00:07:37.333 }, 00:07:37.333 "memory_domains": [ 00:07:37.333 { 00:07:37.333 "dma_device_id": "system", 00:07:37.333 "dma_device_type": 1 00:07:37.333 }, 00:07:37.333 { 00:07:37.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.333 "dma_device_type": 2 00:07:37.333 } 00:07:37.333 ], 00:07:37.333 "driver_specific": {} 00:07:37.333 } 00:07:37.333 ]' 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.333 14:16:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:39.245 14:16:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:39.245 14:16:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:39.245 14:16:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:39.245 14:16:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:39.245 14:16:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:41.157 14:16:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:41.157 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:41.158 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:41.419 14:16:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:41.679 14:16:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.619 ************************************ 00:07:42.619 START TEST filesystem_ext4 00:07:42.619 ************************************ 00:07:42.619 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:42.620 14:16:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:42.620 14:16:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:42.620 mke2fs 1.46.5 (30-Dec-2021) 00:07:42.880 Discarding device blocks: 0/522240 done 00:07:42.880 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:42.880 Filesystem UUID: 857918bb-7976-4d8f-9bb8-3aa2363f9703 00:07:42.880 Superblock backups stored on blocks: 00:07:42.880 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:42.880 00:07:42.880 Allocating group tables: 0/64 done 00:07:42.880 Writing inode tables: 0/64 done 00:07:45.487 Creating journal (8192 blocks): done 00:07:45.487 Writing superblocks and filesystem accounting information: 0/64 done 00:07:45.487 00:07:45.487 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:45.747 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2846567 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.320 14:16:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.320 00:07:46.320 real 0m3.641s 00:07:46.320 user 0m0.025s 00:07:46.320 sys 0m0.049s 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:46.320 ************************************ 00:07:46.320 END TEST filesystem_ext4 00:07:46.320 ************************************ 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:46.320 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.580 ************************************ 00:07:46.580 START TEST filesystem_btrfs 00:07:46.580 ************************************ 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:46.581 14:16:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:46.581 btrfs-progs v6.6.2 00:07:46.581 See https://btrfs.readthedocs.io for more information. 00:07:46.581 00:07:46.581 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
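Each filesystem_* subtest (the ext4 run that just finished and the btrfs and xfs runs that follow) exercises the freshly formatted partition in the same way. A rough shell equivalent of the steps traced from target/filesystem.sh, with the mountpoint and pid taken from this log rather than from the script itself:

  mount /dev/nvme0n1p1 /mnt/device         # mount the exported namespace's partition
  touch /mnt/device/aaa                    # verify the filesystem accepts writes
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 2846567                          # the nvmf_tgt pid must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible on the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table survived the I/O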
00:07:46.581 NOTE: several default settings have changed in version 5.15, please make sure 00:07:46.581 this does not affect your deployments: 00:07:46.581 - DUP for metadata (-m dup) 00:07:46.581 - enabled no-holes (-O no-holes) 00:07:46.581 - enabled free-space-tree (-R free-space-tree) 00:07:46.581 00:07:46.581 Label: (null) 00:07:46.581 UUID: a4dc12b6-5fb3-4069-8d63-4c1a77611279 00:07:46.581 Node size: 16384 00:07:46.581 Sector size: 4096 00:07:46.581 Filesystem size: 510.00MiB 00:07:46.581 Block group profiles: 00:07:46.581 Data: single 8.00MiB 00:07:46.581 Metadata: DUP 32.00MiB 00:07:46.581 System: DUP 8.00MiB 00:07:46.581 SSD detected: yes 00:07:46.581 Zoned device: no 00:07:46.581 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:46.581 Runtime features: free-space-tree 00:07:46.581 Checksum: crc32c 00:07:46.581 Number of devices: 1 00:07:46.581 Devices: 00:07:46.581 ID SIZE PATH 00:07:46.581 1 510.00MiB /dev/nvme0n1p1 00:07:46.581 00:07:46.581 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:46.581 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.151 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2846567 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.152 00:07:47.152 real 0m0.650s 00:07:47.152 user 0m0.015s 00:07:47.152 sys 0m0.073s 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.152 ************************************ 00:07:47.152 END TEST filesystem_btrfs 00:07:47.152 ************************************ 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:47.152 14:16:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.152 ************************************ 00:07:47.152 START TEST filesystem_xfs 00:07:47.152 ************************************ 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:47.152 14:16:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:47.152 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:47.152 = sectsz=512 attr=2, projid32bit=1 00:07:47.152 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:47.152 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:47.152 data = bsize=4096 blocks=130560, imaxpct=25 00:07:47.152 = sunit=0 swidth=0 blks 00:07:47.152 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:47.152 log =internal log bsize=4096 blocks=16384, version=2 00:07:47.152 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:47.152 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.093 Discarding blocks...Done. 
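The make_filesystem helper traced here (common/autotest_common.sh, lines 925-936 in the xtrace) does little more than pick the right overwrite flag before running mkfs. A sketch reconstructed from the trace, not the helper's verbatim source:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F      # mkfs.ext4 takes uppercase -F to force formatting
      else
          force=-f      # mkfs.btrfs and mkfs.xfs take lowercase -f
      fi
      mkfs."$fstype" $force "$dev_name"
  }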
00:07:48.093 14:16:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:48.093 14:16:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.006 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2846567 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.267 00:07:50.267 real 0m3.003s 00:07:50.267 user 0m0.020s 00:07:50.267 sys 0m0.058s 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.267 ************************************ 00:07:50.267 END TEST filesystem_xfs 00:07:50.267 ************************************ 00:07:50.267 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:50.528 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:50.528 14:16:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:50.528 
14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.528 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2846567 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2846567 ']' 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2846567 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2846567 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2846567' 00:07:50.789 killing process with pid 2846567 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2846567 00:07:50.789 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2846567 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.051 00:07:51.051 real 0m14.806s 00:07:51.051 user 0m58.346s 00:07:51.051 sys 0m1.115s 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 ************************************ 00:07:51.051 END TEST nvmf_filesystem_no_in_capsule 00:07:51.051 ************************************ 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 
************************************ 00:07:51.051 START TEST nvmf_filesystem_in_capsule 00:07:51.051 ************************************ 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2849668 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2849668 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2849668 ']' 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:51.051 14:16:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.051 [2024-06-10 14:16:28.555583] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:07:51.051 [2024-06-10 14:16:28.555638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.051 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.051 [2024-06-10 14:16:28.640498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.312 [2024-06-10 14:16:28.713293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.312 [2024-06-10 14:16:28.713332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.312 [2024-06-10 14:16:28.713340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.312 [2024-06-10 14:16:28.713346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.312 [2024-06-10 14:16:28.713352] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
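The nvmf_filesystem_in_capsule run starting here repeats the same provisioning and filesystem checks; the functional difference is the transport creation traced just below, where in_capsule=4096 is passed as the in-capsule data size so that up to 4 KiB of command data can be carried in-capsule instead of being disabled. Side by side, in the rpc.py form assumed above:

  # nvmf_filesystem_no_in_capsule (earlier run):  -c 0, no in-capsule data
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule (this run):        -c 4096, up to 4 KiB in-capsule
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096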
00:07:51.312 [2024-06-10 14:16:28.713401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.312 [2024-06-10 14:16:28.713516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.312 [2024-06-10 14:16:28.713677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.312 [2024-06-10 14:16:28.713677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.884 [2024-06-10 14:16:29.429935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:51.884 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 Malloc1 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 14:16:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 [2024-06-10 14:16:29.557309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:52.146 { 00:07:52.146 "name": "Malloc1", 00:07:52.146 "aliases": [ 00:07:52.146 "f2b5b816-d070-4434-9aad-d370dc1419e5" 00:07:52.146 ], 00:07:52.146 "product_name": "Malloc disk", 00:07:52.146 "block_size": 512, 00:07:52.146 "num_blocks": 1048576, 00:07:52.146 "uuid": "f2b5b816-d070-4434-9aad-d370dc1419e5", 00:07:52.146 "assigned_rate_limits": { 00:07:52.146 "rw_ios_per_sec": 0, 00:07:52.146 "rw_mbytes_per_sec": 0, 00:07:52.146 "r_mbytes_per_sec": 0, 00:07:52.146 "w_mbytes_per_sec": 0 00:07:52.146 }, 00:07:52.146 "claimed": true, 00:07:52.146 "claim_type": "exclusive_write", 00:07:52.146 "zoned": false, 00:07:52.146 "supported_io_types": { 00:07:52.146 "read": true, 00:07:52.146 "write": true, 00:07:52.146 "unmap": true, 00:07:52.146 "write_zeroes": true, 00:07:52.146 "flush": true, 00:07:52.146 "reset": true, 00:07:52.146 "compare": false, 00:07:52.146 "compare_and_write": false, 00:07:52.146 "abort": true, 00:07:52.146 "nvme_admin": false, 00:07:52.146 "nvme_io": false 00:07:52.146 }, 00:07:52.146 "memory_domains": [ 00:07:52.146 { 00:07:52.146 "dma_device_id": "system", 00:07:52.146 "dma_device_type": 1 00:07:52.146 }, 00:07:52.146 { 00:07:52.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.146 "dma_device_type": 2 00:07:52.146 } 00:07:52.146 ], 00:07:52.146 "driver_specific": {} 00:07:52.146 } 00:07:52.146 ]' 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:52.146 14:16:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.062 14:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.062 14:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:54.062 14:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.062 14:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:54.062 14:16:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:55.977 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:55.978 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.978 14:16:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:56.921 14:16:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.864 ************************************ 00:07:57.864 START TEST filesystem_in_capsule_ext4 00:07:57.864 ************************************ 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:57.864 14:16:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.864 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.864 Discarding device blocks: 0/522240 done 00:07:57.864 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:57.864 Filesystem UUID: f4ce8ac5-a4a7-4dd4-898a-778ce29f9ea4 00:07:57.864 Superblock backups stored on blocks: 00:07:57.864 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:57.864 00:07:57.864 Allocating group tables: 0/64 done 00:07:57.864 Writing inode tables: 0/64 done 00:07:58.124 Creating journal (8192 blocks): done 00:07:58.956 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:07:58.957 00:07:58.957 14:16:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:58.957 14:16:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2849668 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.527 00:07:59.527 real 0m1.817s 00:07:59.527 user 0m0.031s 00:07:59.527 sys 0m0.040s 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.527 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:59.527 ************************************ 00:07:59.527 END TEST filesystem_in_capsule_ext4 00:07:59.527 ************************************ 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.788 ************************************ 00:07:59.788 START TEST filesystem_in_capsule_btrfs 00:07:59.788 ************************************ 00:07:59.788 14:16:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:59.788 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:00.049 btrfs-progs v6.6.2 00:08:00.049 See https://btrfs.readthedocs.io for more information. 00:08:00.049 00:08:00.049 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:00.049 NOTE: several default settings have changed in version 5.15, please make sure 00:08:00.049 this does not affect your deployments: 00:08:00.049 - DUP for metadata (-m dup) 00:08:00.049 - enabled no-holes (-O no-holes) 00:08:00.049 - enabled free-space-tree (-R free-space-tree) 00:08:00.049 00:08:00.049 Label: (null) 00:08:00.049 UUID: c661db0a-3f48-4601-ab1b-2149ca848358 00:08:00.049 Node size: 16384 00:08:00.049 Sector size: 4096 00:08:00.049 Filesystem size: 510.00MiB 00:08:00.049 Block group profiles: 00:08:00.049 Data: single 8.00MiB 00:08:00.049 Metadata: DUP 32.00MiB 00:08:00.049 System: DUP 8.00MiB 00:08:00.049 SSD detected: yes 00:08:00.049 Zoned device: no 00:08:00.049 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:00.049 Runtime features: free-space-tree 00:08:00.049 Checksum: crc32c 00:08:00.049 Number of devices: 1 00:08:00.049 Devices: 00:08:00.049 ID SIZE PATH 00:08:00.049 1 510.00MiB /dev/nvme0n1p1 00:08:00.049 00:08:00.049 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:08:00.049 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.049 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.050 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2849668 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.310 00:08:00.310 real 0m0.509s 00:08:00.310 user 0m0.015s 00:08:00.310 sys 0m0.069s 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 ************************************ 00:08:00.310 END TEST filesystem_in_capsule_btrfs 00:08:00.310 ************************************ 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 ************************************ 00:08:00.310 START TEST filesystem_in_capsule_xfs 00:08:00.310 ************************************ 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:08:00.310 14:16:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:00.310 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:00.310 = sectsz=512 attr=2, projid32bit=1 00:08:00.310 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:00.310 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:00.310 data = bsize=4096 blocks=130560, imaxpct=25 00:08:00.310 = sunit=0 swidth=0 blks 00:08:00.310 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:00.310 log =internal log bsize=4096 blocks=16384, version=2 00:08:00.310 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:00.310 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.250 Discarding blocks...Done. 
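The ext4, btrfs and xfs subtests above all exercise the same create/mount/write/unmount sequence. The sketch below is a simplified reconstruction of the traced target/filesystem.sh and common/autotest_common.sh steps, not the verbatim helpers: the real make_filesystem carries extra bookkeeping (the "local i=0" seen in the trace) that is omitted here, and the PID it checks with kill -0 (2849668 in this run) is the nvmf_tgt process started earlier in the suite.

    # Simplified reconstruction of the per-filesystem check traced above.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 needs -F to force-format an existing partition; btrfs/xfs use -f
        if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    nvmf_filesystem_create() {
        local fstype=$1 nvme_name=$2 tgt_pid=$3
        make_filesystem "$fstype" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa && sync          # prove the filesystem is writable
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$tgt_pid"                     # the nvmf_tgt target must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"       # exported namespace still visible
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition still visible
    }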
00:08:01.250 14:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:01.250 14:16:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.159 00:08:03.159 real 0m2.575s 00:08:03.159 user 0m0.022s 00:08:03.159 sys 0m0.055s 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.159 ************************************ 00:08:03.159 END TEST filesystem_in_capsule_xfs 00:08:03.159 ************************************ 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.159 14:16:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2849668 ']' 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2849668' 00:08:03.159 killing process with pid 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2849668 00:08:03.159 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2849668 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:03.419 00:08:03.419 real 0m12.443s 00:08:03.419 user 0m48.949s 00:08:03.419 sys 0m1.123s 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.419 ************************************ 00:08:03.419 END TEST nvmf_filesystem_in_capsule 00:08:03.419 ************************************ 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.419 14:16:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.419 rmmod nvme_tcp 00:08:03.419 rmmod nvme_fabrics 00:08:03.678 rmmod nvme_keyring 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.679 14:16:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.671 14:16:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.671 00:08:05.671 real 0m36.923s 00:08:05.671 user 1m49.445s 00:08:05.671 sys 0m7.691s 00:08:05.671 14:16:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.671 14:16:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.671 ************************************ 00:08:05.671 END TEST nvmf_filesystem 00:08:05.671 ************************************ 00:08:05.671 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.671 14:16:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:05.671 14:16:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.671 14:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.671 ************************************ 00:08:05.671 START TEST nvmf_target_discovery 00:08:05.671 ************************************ 00:08:05.671 14:16:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.933 * Looking for test storage... 
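Before the discovery test starts, the filesystem suite tears itself down. Condensed from the trace above (helper names as logged, bodies simplified; the netns deletion step is an assumption based on the namespace being re-created later, and waitforserial_disconnect is shown as a simple poll):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drop the initiator-side controller
    # waitforserial_disconnect: poll until no block device carries the test serial
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                                         # killprocess: stop nvmf_tgt
    modprobe -v -r nvme-tcp                                 # nvmfcleanup: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # _remove_spdk_ns (assumed behaviour)
    ip -4 addr flush cvl_0_1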
00:08:05.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.933 14:16:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.516 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.516 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.516 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.516 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.517 14:16:49 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:12.517 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:12.517 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:12.517 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:12.517 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.517 14:16:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.517 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.517 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:08:12.778 00:08:12.778 --- 10.0.0.2 ping statistics --- 00:08:12.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.778 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:08:12.778 00:08:12.778 --- 10.0.0.1 ping statistics --- 00:08:12.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.778 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2856396 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2856396 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2856396 ']' 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.778 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:12.779 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:12.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.779 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:12.779 14:16:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.779 [2024-06-10 14:16:50.339313] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:08:12.779 [2024-06-10 14:16:50.339390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.040 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.040 [2024-06-10 14:16:50.428647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.040 [2024-06-10 14:16:50.524827] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.040 [2024-06-10 14:16:50.524887] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.040 [2024-06-10 14:16:50.524896] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.040 [2024-06-10 14:16:50.524903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.040 [2024-06-10 14:16:50.524909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.040 [2024-06-10 14:16:50.525051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.040 [2024-06-10 14:16:50.525195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.040 [2024-06-10 14:16:50.525247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.040 [2024-06-10 14:16:50.525248] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.611 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:13.611 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:08:13.611 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.611 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:13.611 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.871 [2024-06-10 14:16:51.217966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:13.871 14:16:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.871 Null1 00:08:13.871 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 [2024-06-10 14:16:51.274286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 Null2 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:13.872 14:16:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 Null3 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 Null4 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.872 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:14.132 00:08:14.132 Discovery Log Number of Records 6, Generation counter 6 00:08:14.132 =====Discovery Log Entry 0====== 00:08:14.132 trtype: tcp 00:08:14.132 adrfam: ipv4 00:08:14.132 subtype: current discovery subsystem 00:08:14.132 treq: not required 00:08:14.132 portid: 0 00:08:14.132 trsvcid: 4420 00:08:14.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.132 traddr: 10.0.0.2 00:08:14.132 eflags: explicit discovery connections, duplicate discovery information 00:08:14.132 sectype: none 00:08:14.132 =====Discovery Log Entry 1====== 00:08:14.132 trtype: tcp 00:08:14.132 adrfam: ipv4 00:08:14.132 subtype: nvme subsystem 00:08:14.132 treq: not required 00:08:14.132 portid: 0 00:08:14.132 trsvcid: 4420 00:08:14.132 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:14.132 traddr: 10.0.0.2 00:08:14.133 eflags: none 00:08:14.133 sectype: none 00:08:14.133 =====Discovery Log Entry 2====== 00:08:14.133 trtype: tcp 00:08:14.133 adrfam: ipv4 00:08:14.133 subtype: nvme subsystem 00:08:14.133 treq: not required 00:08:14.133 portid: 0 00:08:14.133 trsvcid: 4420 00:08:14.133 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:14.133 traddr: 10.0.0.2 00:08:14.133 eflags: none 00:08:14.133 sectype: none 00:08:14.133 =====Discovery Log Entry 3====== 00:08:14.133 trtype: tcp 00:08:14.133 adrfam: ipv4 00:08:14.133 subtype: nvme subsystem 00:08:14.133 treq: not required 00:08:14.133 portid: 0 00:08:14.133 trsvcid: 4420 00:08:14.133 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:14.133 traddr: 10.0.0.2 00:08:14.133 eflags: none 00:08:14.133 sectype: none 00:08:14.133 =====Discovery Log Entry 4====== 00:08:14.133 trtype: tcp 00:08:14.133 adrfam: ipv4 00:08:14.133 subtype: nvme subsystem 00:08:14.133 treq: not required 
00:08:14.133 portid: 0 00:08:14.133 trsvcid: 4420 00:08:14.133 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:14.133 traddr: 10.0.0.2 00:08:14.133 eflags: none 00:08:14.133 sectype: none 00:08:14.133 =====Discovery Log Entry 5====== 00:08:14.133 trtype: tcp 00:08:14.133 adrfam: ipv4 00:08:14.133 subtype: discovery subsystem referral 00:08:14.133 treq: not required 00:08:14.133 portid: 0 00:08:14.133 trsvcid: 4430 00:08:14.133 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.133 traddr: 10.0.0.2 00:08:14.133 eflags: none 00:08:14.133 sectype: none 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:14.133 Perform nvmf subsystem discovery via RPC 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 [ 00:08:14.133 { 00:08:14.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:14.133 "subtype": "Discovery", 00:08:14.133 "listen_addresses": [ 00:08:14.133 { 00:08:14.133 "trtype": "TCP", 00:08:14.133 "adrfam": "IPv4", 00:08:14.133 "traddr": "10.0.0.2", 00:08:14.133 "trsvcid": "4420" 00:08:14.133 } 00:08:14.133 ], 00:08:14.133 "allow_any_host": true, 00:08:14.133 "hosts": [] 00:08:14.133 }, 00:08:14.133 { 00:08:14.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.133 "subtype": "NVMe", 00:08:14.133 "listen_addresses": [ 00:08:14.133 { 00:08:14.133 "trtype": "TCP", 00:08:14.133 "adrfam": "IPv4", 00:08:14.133 "traddr": "10.0.0.2", 00:08:14.133 "trsvcid": "4420" 00:08:14.133 } 00:08:14.133 ], 00:08:14.133 "allow_any_host": true, 00:08:14.133 "hosts": [], 00:08:14.133 "serial_number": "SPDK00000000000001", 00:08:14.133 "model_number": "SPDK bdev Controller", 00:08:14.133 "max_namespaces": 32, 00:08:14.133 "min_cntlid": 1, 00:08:14.133 "max_cntlid": 65519, 00:08:14.133 "namespaces": [ 00:08:14.133 { 00:08:14.133 "nsid": 1, 00:08:14.133 "bdev_name": "Null1", 00:08:14.133 "name": "Null1", 00:08:14.133 "nguid": "790C7FB0EAD147F8B08A9D7A80CAD879", 00:08:14.133 "uuid": "790c7fb0-ead1-47f8-b08a-9d7a80cad879" 00:08:14.133 } 00:08:14.133 ] 00:08:14.133 }, 00:08:14.133 { 00:08:14.133 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:14.133 "subtype": "NVMe", 00:08:14.133 "listen_addresses": [ 00:08:14.133 { 00:08:14.133 "trtype": "TCP", 00:08:14.133 "adrfam": "IPv4", 00:08:14.133 "traddr": "10.0.0.2", 00:08:14.133 "trsvcid": "4420" 00:08:14.133 } 00:08:14.133 ], 00:08:14.133 "allow_any_host": true, 00:08:14.133 "hosts": [], 00:08:14.133 "serial_number": "SPDK00000000000002", 00:08:14.133 "model_number": "SPDK bdev Controller", 00:08:14.133 "max_namespaces": 32, 00:08:14.133 "min_cntlid": 1, 00:08:14.133 "max_cntlid": 65519, 00:08:14.133 "namespaces": [ 00:08:14.133 { 00:08:14.133 "nsid": 1, 00:08:14.133 "bdev_name": "Null2", 00:08:14.133 "name": "Null2", 00:08:14.133 "nguid": "F02A373043F14EA78437E1F36DE6CE96", 00:08:14.133 "uuid": "f02a3730-43f1-4ea7-8437-e1f36de6ce96" 00:08:14.133 } 00:08:14.133 ] 00:08:14.133 }, 00:08:14.133 { 00:08:14.133 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:14.133 "subtype": "NVMe", 00:08:14.133 "listen_addresses": [ 00:08:14.133 { 00:08:14.133 "trtype": "TCP", 00:08:14.133 "adrfam": "IPv4", 00:08:14.133 "traddr": "10.0.0.2", 00:08:14.133 "trsvcid": "4420" 00:08:14.133 } 00:08:14.133 ], 00:08:14.133 "allow_any_host": true, 
00:08:14.133 "hosts": [], 00:08:14.133 "serial_number": "SPDK00000000000003", 00:08:14.133 "model_number": "SPDK bdev Controller", 00:08:14.133 "max_namespaces": 32, 00:08:14.133 "min_cntlid": 1, 00:08:14.133 "max_cntlid": 65519, 00:08:14.133 "namespaces": [ 00:08:14.133 { 00:08:14.133 "nsid": 1, 00:08:14.133 "bdev_name": "Null3", 00:08:14.133 "name": "Null3", 00:08:14.133 "nguid": "4D855B21B56C4C5A890F584EB2E27583", 00:08:14.133 "uuid": "4d855b21-b56c-4c5a-890f-584eb2e27583" 00:08:14.133 } 00:08:14.133 ] 00:08:14.133 }, 00:08:14.133 { 00:08:14.133 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:14.133 "subtype": "NVMe", 00:08:14.133 "listen_addresses": [ 00:08:14.133 { 00:08:14.133 "trtype": "TCP", 00:08:14.133 "adrfam": "IPv4", 00:08:14.133 "traddr": "10.0.0.2", 00:08:14.133 "trsvcid": "4420" 00:08:14.133 } 00:08:14.133 ], 00:08:14.133 "allow_any_host": true, 00:08:14.133 "hosts": [], 00:08:14.133 "serial_number": "SPDK00000000000004", 00:08:14.133 "model_number": "SPDK bdev Controller", 00:08:14.133 "max_namespaces": 32, 00:08:14.133 "min_cntlid": 1, 00:08:14.133 "max_cntlid": 65519, 00:08:14.133 "namespaces": [ 00:08:14.133 { 00:08:14.133 "nsid": 1, 00:08:14.133 "bdev_name": "Null4", 00:08:14.133 "name": "Null4", 00:08:14.133 "nguid": "36F057439C0F41B7B071D58DF87FA1F7", 00:08:14.133 "uuid": "36f05743-9c0f-41b7-b071-d58df87fa1f7" 00:08:14.133 } 00:08:14.133 ] 00:08:14.133 } 00:08:14.133 ] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.133 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.393 rmmod nvme_tcp 00:08:14.393 rmmod nvme_fabrics 00:08:14.393 rmmod nvme_keyring 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2856396 ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2856396 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2856396 ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2856396 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2856396 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2856396' 00:08:14.393 killing process with pid 2856396 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2856396 00:08:14.393 14:16:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2856396 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.657 14:16:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.566 14:16:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.566 00:08:16.566 real 0m10.901s 00:08:16.566 user 0m8.144s 00:08:16.566 sys 0m5.536s 00:08:16.566 14:16:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:16.566 14:16:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.566 ************************************ 00:08:16.566 END TEST nvmf_target_discovery 00:08:16.566 ************************************ 00:08:16.566 14:16:54 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.567 14:16:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:16.567 14:16:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:16.567 14:16:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.828 ************************************ 00:08:16.828 START TEST nvmf_referrals 00:08:16.828 ************************************ 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.828 * Looking for test storage... 00:08:16.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
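Those three loopback addresses are what the rest of referrals.sh exercises: each gets registered as a discovery referral on the referral port and is then read back twice, once through nvmf_discovery_get_referrals and once through nvme-cli against the discovery listener. A rough sketch of that add-and-verify step, assuming rpc.py on the default socket; the port, the jq filters, and the nvme discover arguments are taken from the commands this log records:

    rpc=./scripts/rpc.py                                         # assumed path to SPDK's RPC client
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430   # advertise each referral
    done
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # the initiator-side view of the same list:
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort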
00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.828 14:16:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.973 14:17:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:24.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:24.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.973 14:17:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:24.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:24.973 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.973 14:17:01 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.973 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:08:24.974 00:08:24.974 --- 10.0.0.2 ping statistics --- 00:08:24.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.974 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:08:24.974 00:08:24.974 --- 10.0.0.1 ping statistics --- 00:08:24.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.974 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2861049 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2861049 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2861049 ']' 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:24.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:24.974 14:17:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 [2024-06-10 14:17:01.540880] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:08:24.974 [2024-06-10 14:17:01.540946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.974 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.974 [2024-06-10 14:17:01.613539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.974 [2024-06-10 14:17:01.689814] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.974 [2024-06-10 14:17:01.689851] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.974 [2024-06-10 14:17:01.689858] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.974 [2024-06-10 14:17:01.689864] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.974 [2024-06-10 14:17:01.689870] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.974 [2024-06-10 14:17:01.691334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.974 [2024-06-10 14:17:01.691420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.974 [2024-06-10 14:17:01.691698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.974 [2024-06-10 14:17:01.691699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 [2024-06-10 14:17:02.471198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 [2024-06-10 14:17:02.487374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
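With the namespace plumbing and the TCP transport in place, the target side of this test is now fully up: nvmf_tgt running inside cvl_0_0_ns_spdk on four reactors, and the discovery service listening on 10.0.0.2 port 8009. A condensed sketch of that bring-up, using the same arguments the log shows (the binary and rpc.py paths are placeholders for this job's build tree):

    ns="ip netns exec cvl_0_0_ns_spdk"                           # namespace created during nvmftestinit
    $ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &             # 0xF -> reactors on 4 cores, as logged above
    # (the test waits for the RPC socket via waitforlisten before issuing commands)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same transport options the test passes
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery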
00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.235 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:25.496 14:17:02 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.496 14:17:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.496 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.756 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.017 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.018 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.278 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.539 14:17:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.539 rmmod nvme_tcp 00:08:26.539 rmmod nvme_fabrics 00:08:26.539 rmmod nvme_keyring 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2861049 ']' 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2861049 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2861049 ']' 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2861049 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2861049 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2861049' 00:08:26.539 killing process with pid 2861049 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2861049 00:08:26.539 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2861049 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.801 14:17:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.715 14:17:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.977 00:08:28.977 real 0m12.140s 00:08:28.977 user 0m13.204s 00:08:28.977 sys 0m5.881s 00:08:28.977 14:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:08:28.977 14:17:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.977 ************************************ 00:08:28.977 END TEST nvmf_referrals 00:08:28.977 ************************************ 00:08:28.977 14:17:06 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.977 14:17:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:28.977 14:17:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.977 14:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.977 ************************************ 00:08:28.977 START TEST nvmf_connect_disconnect 00:08:28.977 ************************************ 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.977 * Looking for test storage... 00:08:28.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.977 14:17:06 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.977 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.978 14:17:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.122 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:37.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:37.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:37.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:37.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:08:37.123 00:08:37.123 --- 10.0.0.2 ping statistics --- 00:08:37.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.123 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:37.123 00:08:37.123 --- 10.0.0.1 ping statistics --- 00:08:37.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.123 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2865850 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2865850 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2865850 ']' 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:37.123 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.124 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:37.124 14:17:13 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.124 [2024-06-10 14:17:13.827771] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:08:37.124 [2024-06-10 14:17:13.827833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.124 [2024-06-10 14:17:13.915928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.124 [2024-06-10 14:17:14.015050] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.124 [2024-06-10 14:17:14.015110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.124 [2024-06-10 14:17:14.015118] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.124 [2024-06-10 14:17:14.015125] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.124 [2024-06-10 14:17:14.015131] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.124 [2024-06-10 14:17:14.015272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.124 [2024-06-10 14:17:14.015414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.124 [2024-06-10 14:17:14.015716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.124 [2024-06-10 14:17:14.015719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.124 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:37.124 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:37.124 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.124 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:37.124 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 [2024-06-10 14:17:14.744068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.418 14:17:14 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:37.418 [2024-06-10 14:17:14.803412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:37.418 14:17:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:41.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.754 rmmod nvme_tcp 00:08:55.754 rmmod nvme_fabrics 00:08:55.754 rmmod nvme_keyring 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2865850 ']' 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2865850 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 2865850 ']' 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2865850 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:55.754 14:17:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2865850 00:08:55.754 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:55.754 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:55.754 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2865850' 00:08:55.754 killing process with pid 2865850 00:08:55.754 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2865850 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2865850 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.755 14:17:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.668 14:17:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.668 00:08:57.668 real 0m28.821s 00:08:57.668 user 1m18.419s 00:08:57.668 sys 0m6.446s 00:08:57.668 14:17:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:57.668 14:17:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:57.668 ************************************ 00:08:57.668 END TEST nvmf_connect_disconnect 00:08:57.668 ************************************ 00:08:57.668 14:17:35 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:57.668 14:17:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:57.668 14:17:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:57.668 14:17:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.930 ************************************ 00:08:57.930 START TEST nvmf_multitarget 00:08:57.930 ************************************ 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:57.930 * Looking for test storage... 
00:08:57.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
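The nvmf/common.sh preamble traced above establishes the initiator identity once per test before any connect is attempted. A minimal sketch of that pattern, reconstructed from the trace (the parameter expansion used to derive NVME_HOSTID is an assumption of this sketch; the hostnqn value is simply whatever nvme gen-hostnqn prints on the build host):
  # Sketch only -- reconstructed from the trace above, not the full nvmf/common.sh
  NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumption: strip the NQN prefix to keep the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'                   # presumably paired later with "${NVME_HOST[@]}" by the connect loops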
00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.930 14:17:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.523 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:04.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:04.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:04.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
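The gather_supported_nvmf_pci_devs walk traced above boils down to matching supported PCI IDs (0x8086:0x159b / 0x1592 for the E810 ports on this rig) and then resolving each matching device to its netdev through sysfs. A rough standalone equivalent, under the assumption that lspci -Dn output is parsed directly (common.sh itself works from a pre-built pci_bus_cache, which is not shown in this trace):
  # Sketch only -- approximates the discovery loop traced above
  net_devs=()
  for pci in $(lspci -Dn -d 8086: | awk '$3 ~ /159b|1592/ {print $1}'); do   # assumption: lspci parsing
      for path in /sys/bus/pci/devices/$pci/net/*; do      # same sysfs path used in nvmf/common.sh@383
          [[ -e $path ]] || continue
          net_devs+=("${path##*/}")                         # yields cvl_0_0 / cvl_0_1 on this host
      done
  done
  echo "Found net devices: ${net_devs[*]}"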
00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:04.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:09:04.524 00:09:04.524 --- 10.0.0.2 ping statistics --- 00:09:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.524 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:09:04.524 00:09:04.524 --- 10.0.0.1 ping statistics --- 00:09:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.524 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.524 14:17:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2873653 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2873653 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 2873653 ']' 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:04.524 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.524 [2024-06-10 14:17:42.072388] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:09:04.524 [2024-06-10 14:17:42.072436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.524 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.784 [2024-06-10 14:17:42.157672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.784 [2024-06-10 14:17:42.252579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.784 [2024-06-10 14:17:42.252629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.784 [2024-06-10 14:17:42.252637] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.784 [2024-06-10 14:17:42.252644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.784 [2024-06-10 14:17:42.252650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.784 [2024-06-10 14:17:42.252779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.784 [2024-06-10 14:17:42.252926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.784 [2024-06-10 14:17:42.253097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.784 [2024-06-10 14:17:42.253098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.356 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:05.356 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:09:05.356 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.356 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:05.356 14:17:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:05.617 14:17:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.617 14:17:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:05.617 14:17:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:05.617 14:17:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:05.617 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:05.617 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:05.617 "nvmf_tgt_1" 00:09:05.617 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:05.876 "nvmf_tgt_2" 00:09:05.876 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:05.876 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:05.876 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:05.876 
14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:06.136 true 00:09:06.136 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:06.136 true 00:09:06.136 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:06.136 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.396 rmmod nvme_tcp 00:09:06.396 rmmod nvme_fabrics 00:09:06.396 rmmod nvme_keyring 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.396 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2873653 ']' 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2873653 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2873653 ']' 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2873653 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2873653 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2873653' 00:09:06.397 killing process with pid 2873653 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2873653 00:09:06.397 14:17:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2873653 00:09:06.693 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.694 14:17:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.614 14:17:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.614 00:09:08.614 real 0m10.802s 00:09:08.614 user 0m9.803s 00:09:08.614 sys 0m5.398s 00:09:08.614 14:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:08.614 14:17:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:08.614 ************************************ 00:09:08.614 END TEST nvmf_multitarget 00:09:08.614 ************************************ 00:09:08.614 14:17:46 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:08.614 14:17:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:08.614 14:17:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:08.614 14:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.614 ************************************ 00:09:08.614 START TEST nvmf_rpc 00:09:08.614 ************************************ 00:09:08.614 14:17:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:08.875 * Looking for test storage... 00:09:08.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.875 14:17:46 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.875 
14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.875 14:17:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.014 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:17.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:17.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:17.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.015 
14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:17.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:09:17.015 00:09:17.015 --- 10.0.0.2 ping statistics --- 00:09:17.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.015 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:09:17.015 00:09:17.015 --- 10.0.0.1 ping statistics --- 00:09:17.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.015 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2878336 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2878336 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2878336 ']' 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:17.015 14:17:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.015 [2024-06-10 14:17:53.594088] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
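Condensed from the nvmf_tcp_init trace above (interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the nvmf_tgt path are the ones used by this rig), the target-side network setup amounts to:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> host
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF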
00:09:17.015 [2024-06-10 14:17:53.594148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.015 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.015 [2024-06-10 14:17:53.681199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.015 [2024-06-10 14:17:53.777376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.015 [2024-06-10 14:17:53.777429] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.015 [2024-06-10 14:17:53.777438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.015 [2024-06-10 14:17:53.777446] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.015 [2024-06-10 14:17:53.777452] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.015 [2024-06-10 14:17:53.777608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.015 [2024-06-10 14:17:53.777864] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.015 [2024-06-10 14:17:53.778034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.016 [2024-06-10 14:17:53.778036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:17.016 "tick_rate": 2400000000, 00:09:17.016 "poll_groups": [ 00:09:17.016 { 00:09:17.016 "name": "nvmf_tgt_poll_group_000", 00:09:17.016 "admin_qpairs": 0, 00:09:17.016 "io_qpairs": 0, 00:09:17.016 "current_admin_qpairs": 0, 00:09:17.016 "current_io_qpairs": 0, 00:09:17.016 "pending_bdev_io": 0, 00:09:17.016 "completed_nvme_io": 0, 00:09:17.016 "transports": [] 00:09:17.016 }, 00:09:17.016 { 00:09:17.016 "name": "nvmf_tgt_poll_group_001", 00:09:17.016 "admin_qpairs": 0, 00:09:17.016 "io_qpairs": 0, 00:09:17.016 "current_admin_qpairs": 0, 00:09:17.016 "current_io_qpairs": 0, 00:09:17.016 "pending_bdev_io": 0, 00:09:17.016 "completed_nvme_io": 0, 00:09:17.016 "transports": [] 00:09:17.016 }, 00:09:17.016 { 00:09:17.016 "name": "nvmf_tgt_poll_group_002", 00:09:17.016 "admin_qpairs": 0, 00:09:17.016 "io_qpairs": 0, 00:09:17.016 "current_admin_qpairs": 0, 00:09:17.016 "current_io_qpairs": 0, 00:09:17.016 "pending_bdev_io": 0, 00:09:17.016 "completed_nvme_io": 0, 00:09:17.016 "transports": [] 
00:09:17.016 }, 00:09:17.016 { 00:09:17.016 "name": "nvmf_tgt_poll_group_003", 00:09:17.016 "admin_qpairs": 0, 00:09:17.016 "io_qpairs": 0, 00:09:17.016 "current_admin_qpairs": 0, 00:09:17.016 "current_io_qpairs": 0, 00:09:17.016 "pending_bdev_io": 0, 00:09:17.016 "completed_nvme_io": 0, 00:09:17.016 "transports": [] 00:09:17.016 } 00:09:17.016 ] 00:09:17.016 }' 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:17.016 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.276 [2024-06-10 14:17:54.624451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.276 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:17.276 "tick_rate": 2400000000, 00:09:17.276 "poll_groups": [ 00:09:17.276 { 00:09:17.276 "name": "nvmf_tgt_poll_group_000", 00:09:17.276 "admin_qpairs": 0, 00:09:17.276 "io_qpairs": 0, 00:09:17.276 "current_admin_qpairs": 0, 00:09:17.276 "current_io_qpairs": 0, 00:09:17.276 "pending_bdev_io": 0, 00:09:17.276 "completed_nvme_io": 0, 00:09:17.276 "transports": [ 00:09:17.276 { 00:09:17.277 "trtype": "TCP" 00:09:17.277 } 00:09:17.277 ] 00:09:17.277 }, 00:09:17.277 { 00:09:17.277 "name": "nvmf_tgt_poll_group_001", 00:09:17.277 "admin_qpairs": 0, 00:09:17.277 "io_qpairs": 0, 00:09:17.277 "current_admin_qpairs": 0, 00:09:17.277 "current_io_qpairs": 0, 00:09:17.277 "pending_bdev_io": 0, 00:09:17.277 "completed_nvme_io": 0, 00:09:17.277 "transports": [ 00:09:17.277 { 00:09:17.277 "trtype": "TCP" 00:09:17.277 } 00:09:17.277 ] 00:09:17.277 }, 00:09:17.277 { 00:09:17.277 "name": "nvmf_tgt_poll_group_002", 00:09:17.277 "admin_qpairs": 0, 00:09:17.277 "io_qpairs": 0, 00:09:17.277 "current_admin_qpairs": 0, 00:09:17.277 "current_io_qpairs": 0, 00:09:17.277 "pending_bdev_io": 0, 00:09:17.277 "completed_nvme_io": 0, 00:09:17.277 "transports": [ 00:09:17.277 { 00:09:17.277 "trtype": "TCP" 00:09:17.277 } 00:09:17.277 ] 00:09:17.277 }, 00:09:17.277 { 00:09:17.277 "name": "nvmf_tgt_poll_group_003", 00:09:17.277 "admin_qpairs": 0, 00:09:17.277 "io_qpairs": 0, 00:09:17.277 "current_admin_qpairs": 0, 00:09:17.277 "current_io_qpairs": 0, 00:09:17.277 "pending_bdev_io": 0, 00:09:17.277 "completed_nvme_io": 0, 00:09:17.277 "transports": [ 00:09:17.277 { 00:09:17.277 "trtype": "TCP" 00:09:17.277 } 00:09:17.277 ] 00:09:17.277 } 00:09:17.277 ] 
00:09:17.277 }' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 Malloc1 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 [2024-06-10 14:17:54.816257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:17.277 [2024-06-10 14:17:54.842836] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:17.277 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:17.277 could not add new controller: failed to write to nvme-fabrics device 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:17.277 14:17:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.187 14:17:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.187 14:17:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:19.187 14:17:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.187 14:17:56 
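The access-control check traced above reduces to the following (a sketch; rpc.py stands in for the test's rpc_cmd wrapper, and HOSTNQN/HOSTID are the nqn.2014-08.org.nvmexpress:uuid:00d0226a-... values used by this run):

  rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # disable allow_any_host
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # -> rejected: "Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ...",
  #    Input/output error on /dev/nvme-fabrics, which is what the NOT wrapper expects
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420                  # now accepted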
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:19.187 14:17:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.096 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.097 [2024-06-10 14:17:58.450958] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:21.097 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:21.097 could not add new controller: failed to write to nvme-fabrics device 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:21.097 14:17:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.479 14:17:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.479 14:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:22.479 14:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.479 14:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:22.479 14:17:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:24.390 14:18:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.649 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.650 [2024-06-10 14:18:02.091328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.650 14:18:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.558 14:18:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.558 14:18:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:26.558 14:18:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.558 14:18:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:26.558 14:18:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 [2024-06-10 14:18:05.788095] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.468 
14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:28.468 14:18:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.854 14:18:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.854 14:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:29.854 14:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.854 14:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:29.854 14:18:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:31.761 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:31.761 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:31.761 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.021 14:18:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 [2024-06-10 14:18:09.490615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.021 14:18:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.931 14:18:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.931 14:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:33.931 14:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.931 14:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:33.931 14:18:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.876 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.877 [2024-06-10 14:18:13.193178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:35.877 14:18:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.258 14:18:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.258 14:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:37.258 14:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
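Each pass of the rpc.sh loop traced through this stretch (five passes, per loops=5 earlier) follows the same pattern; condensed here with rpc.py standing in for the test's rpc_cmd wrapper and the NQN, serial, and host identity taken from the trace:

  # transport was created once up front: rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: the namespace shows up as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1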
00:09:37.258 14:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:37.258 14:18:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:39.168 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 [2024-06-10 14:18:16.901501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:39.428 14:18:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.339 14:18:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.339 14:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:41.339 14:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.339 14:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:41.339 14:18:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 [2024-06-10 14:18:20.599099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 [2024-06-10 14:18:20.659260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 [2024-06-10 14:18:20.723467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.252 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 [2024-06-10 14:18:20.783650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.253 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 [2024-06-10 14:18:20.843871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.513 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:43.514 "tick_rate": 2400000000, 00:09:43.514 "poll_groups": [ 00:09:43.514 { 00:09:43.514 "name": "nvmf_tgt_poll_group_000", 00:09:43.514 "admin_qpairs": 0, 00:09:43.514 
"io_qpairs": 224, 00:09:43.514 "current_admin_qpairs": 0, 00:09:43.514 "current_io_qpairs": 0, 00:09:43.514 "pending_bdev_io": 0, 00:09:43.514 "completed_nvme_io": 361, 00:09:43.514 "transports": [ 00:09:43.514 { 00:09:43.514 "trtype": "TCP" 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 }, 00:09:43.514 { 00:09:43.514 "name": "nvmf_tgt_poll_group_001", 00:09:43.514 "admin_qpairs": 1, 00:09:43.514 "io_qpairs": 223, 00:09:43.514 "current_admin_qpairs": 0, 00:09:43.514 "current_io_qpairs": 0, 00:09:43.514 "pending_bdev_io": 0, 00:09:43.514 "completed_nvme_io": 224, 00:09:43.514 "transports": [ 00:09:43.514 { 00:09:43.514 "trtype": "TCP" 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 }, 00:09:43.514 { 00:09:43.514 "name": "nvmf_tgt_poll_group_002", 00:09:43.514 "admin_qpairs": 6, 00:09:43.514 "io_qpairs": 218, 00:09:43.514 "current_admin_qpairs": 0, 00:09:43.514 "current_io_qpairs": 0, 00:09:43.514 "pending_bdev_io": 0, 00:09:43.514 "completed_nvme_io": 380, 00:09:43.514 "transports": [ 00:09:43.514 { 00:09:43.514 "trtype": "TCP" 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 }, 00:09:43.514 { 00:09:43.514 "name": "nvmf_tgt_poll_group_003", 00:09:43.514 "admin_qpairs": 0, 00:09:43.514 "io_qpairs": 224, 00:09:43.514 "current_admin_qpairs": 0, 00:09:43.514 "current_io_qpairs": 0, 00:09:43.514 "pending_bdev_io": 0, 00:09:43.514 "completed_nvme_io": 274, 00:09:43.514 "transports": [ 00:09:43.514 { 00:09:43.514 "trtype": "TCP" 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 } 00:09:43.514 ] 00:09:43.514 }' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:43.514 14:18:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.514 rmmod nvme_tcp 00:09:43.514 rmmod nvme_fabrics 00:09:43.514 rmmod nvme_keyring 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:43.514 14:18:21 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2878336 ']' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2878336 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2878336 ']' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2878336 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:43.514 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2878336 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2878336' 00:09:43.774 killing process with pid 2878336 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2878336 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2878336 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.774 14:18:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.312 14:18:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.312 00:09:46.312 real 0m37.159s 00:09:46.312 user 1m51.890s 00:09:46.312 sys 0m7.030s 00:09:46.312 14:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:46.312 14:18:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.312 ************************************ 00:09:46.312 END TEST nvmf_rpc 00:09:46.312 ************************************ 00:09:46.312 14:18:23 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:46.312 14:18:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:46.312 14:18:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:46.312 14:18:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.312 ************************************ 00:09:46.312 START TEST nvmf_invalid 00:09:46.312 ************************************ 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:46.312 * Looking for test storage... 
00:09:46.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.312 14:18:23 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.313 14:18:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.922 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:52.923 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:52.923 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:52.923 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:52.923 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.923 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:53.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:09:53.186 00:09:53.186 --- 10.0.0.2 ping statistics --- 00:09:53.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.186 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:09:53.186 00:09:53.186 --- 10.0.0.1 ping statistics --- 00:09:53.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.186 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2888646 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2888646 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2888646 ']' 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:53.186 14:18:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.186 [2024-06-10 14:18:30.708678] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:09:53.186 [2024-06-10 14:18:30.708742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.186 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.447 [2024-06-10 14:18:30.796655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.447 [2024-06-10 14:18:30.876392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.447 [2024-06-10 14:18:30.876433] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.447 [2024-06-10 14:18:30.876441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.447 [2024-06-10 14:18:30.876447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.447 [2024-06-10 14:18:30.876453] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.447 [2024-06-10 14:18:30.880334] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.447 [2024-06-10 14:18:30.880398] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.447 [2024-06-10 14:18:30.880725] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.447 [2024-06-10 14:18:30.880726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.019 14:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:54.019 14:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:54.019 14:18:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.019 14:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:54.019 14:18:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11651 00:09:54.281 [2024-06-10 14:18:31.813655] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:54.281 { 00:09:54.281 "nqn": "nqn.2016-06.io.spdk:cnode11651", 00:09:54.281 "tgt_name": "foobar", 00:09:54.281 "method": "nvmf_create_subsystem", 00:09:54.281 "req_id": 1 00:09:54.281 } 00:09:54.281 Got JSON-RPC error response 00:09:54.281 response: 00:09:54.281 { 00:09:54.281 "code": -32603, 00:09:54.281 "message": "Unable to find target foobar" 00:09:54.281 }' 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:54.281 { 00:09:54.281 "nqn": "nqn.2016-06.io.spdk:cnode11651", 00:09:54.281 "tgt_name": "foobar", 00:09:54.281 "method": "nvmf_create_subsystem", 00:09:54.281 "req_id": 1 00:09:54.281 } 00:09:54.281 Got JSON-RPC error response 00:09:54.281 response: 00:09:54.281 { 00:09:54.281 "code": -32603, 00:09:54.281 "message": "Unable to find target foobar" 00:09:54.281 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:54.281 14:18:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25618 00:09:54.541 [2024-06-10 14:18:32.038433] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25618: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:54.541 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:54.541 { 00:09:54.541 "nqn": "nqn.2016-06.io.spdk:cnode25618", 00:09:54.541 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:54.541 "method": "nvmf_create_subsystem", 00:09:54.541 "req_id": 1 00:09:54.541 } 00:09:54.541 Got JSON-RPC error response 00:09:54.541 response: 00:09:54.541 { 00:09:54.541 "code": -32602, 00:09:54.541 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:54.541 }' 00:09:54.541 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:54.541 { 00:09:54.541 "nqn": "nqn.2016-06.io.spdk:cnode25618", 00:09:54.541 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:54.541 "method": "nvmf_create_subsystem", 00:09:54.541 "req_id": 1 00:09:54.541 } 00:09:54.541 Got JSON-RPC error response 00:09:54.541 response: 00:09:54.541 { 00:09:54.541 "code": -32602, 00:09:54.541 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:54.541 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:54.541 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:54.541 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23694 00:09:54.802 [2024-06-10 14:18:32.263077] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23694: invalid model number 'SPDK_Controller' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:54.802 { 00:09:54.802 "nqn": "nqn.2016-06.io.spdk:cnode23694", 00:09:54.802 "model_number": "SPDK_Controller\u001f", 00:09:54.802 "method": "nvmf_create_subsystem", 00:09:54.802 "req_id": 1 00:09:54.802 } 00:09:54.802 Got JSON-RPC error response 00:09:54.802 response: 00:09:54.802 { 00:09:54.802 "code": -32602, 00:09:54.802 "message": "Invalid MN SPDK_Controller\u001f" 00:09:54.802 }' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:54.802 { 00:09:54.802 "nqn": "nqn.2016-06.io.spdk:cnode23694", 00:09:54.802 "model_number": "SPDK_Controller\u001f", 00:09:54.802 "method": "nvmf_create_subsystem", 00:09:54.802 "req_id": 1 00:09:54.802 } 00:09:54.802 Got JSON-RPC error response 00:09:54.802 response: 00:09:54.802 { 00:09:54.802 "code": -32602, 00:09:54.802 "message": "Invalid MN SPDK_Controller\u001f" 00:09:54.802 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:54.802 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:55.063 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 121 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '{:@X*9$@IY@UsEOy!v0qi' 00:09:55.064 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '{:@X*9$@IY@UsEOy!v0qi' nqn.2016-06.io.spdk:cnode32252 00:09:55.064 [2024-06-10 14:18:32.648332] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32252: invalid serial number '{:@X*9$@IY@UsEOy!v0qi' 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:55.326 { 00:09:55.326 "nqn": "nqn.2016-06.io.spdk:cnode32252", 00:09:55.326 "serial_number": "{:@X*9$@IY@UsEOy!v0qi", 00:09:55.326 "method": "nvmf_create_subsystem", 00:09:55.326 "req_id": 1 00:09:55.326 } 00:09:55.326 Got JSON-RPC error response 00:09:55.326 response: 00:09:55.326 { 00:09:55.326 "code": -32602, 
00:09:55.326 "message": "Invalid SN {:@X*9$@IY@UsEOy!v0qi" 00:09:55.326 }' 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:55.326 { 00:09:55.326 "nqn": "nqn.2016-06.io.spdk:cnode32252", 00:09:55.326 "serial_number": "{:@X*9$@IY@UsEOy!v0qi", 00:09:55.326 "method": "nvmf_create_subsystem", 00:09:55.326 "req_id": 1 00:09:55.326 } 00:09:55.326 Got JSON-RPC error response 00:09:55.326 response: 00:09:55.326 { 00:09:55.326 "code": -32602, 00:09:55.326 "message": "Invalid SN {:@X*9$@IY@UsEOy!v0qi" 00:09:55.326 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.326 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:55.327 14:18:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:55.327 14:18:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:55.327 14:18:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:55.327 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.328 14:18:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.328 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.589 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '~m93n7ssR#2N*U@vJvMKivO^W"PP{@9fbU<-3zVr' 00:09:55.590 14:18:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~m93n7ssR#2N*U@vJvMKivO^W"PP{@9fbU<-3zVr' nqn.2016-06.io.spdk:cnode8836 00:09:55.590 [2024-06-10 14:18:33.178053] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8836: invalid model number '~m93n7ssR#2N*U@vJvMKivO^W"PP{@9fbU<-3zVr' 00:09:55.851 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:55.851 { 00:09:55.851 "nqn": "nqn.2016-06.io.spdk:cnode8836", 00:09:55.851 "model_number": "~m93n7ssR\u007f#2N*U@vJvMKivO^W\"PP{@9fbU<-3zVr", 00:09:55.851 "method": "nvmf_create_subsystem", 00:09:55.851 "req_id": 1 00:09:55.851 } 00:09:55.851 Got JSON-RPC error response 00:09:55.851 response: 00:09:55.851 { 00:09:55.851 "code": -32602, 00:09:55.851 "message": "Invalid MN ~m93n7ssR\u007f#2N*U@vJvMKivO^W\"PP{@9fbU<-3zVr" 00:09:55.851 }' 00:09:55.851 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:55.851 { 00:09:55.851 "nqn": "nqn.2016-06.io.spdk:cnode8836", 00:09:55.851 "model_number": "~m93n7ssR\u007f#2N*U@vJvMKivO^W\"PP{@9fbU<-3zVr", 00:09:55.851 "method": "nvmf_create_subsystem", 00:09:55.851 "req_id": 1 00:09:55.851 } 00:09:55.851 Got JSON-RPC error response 00:09:55.851 response: 00:09:55.851 { 00:09:55.851 "code": -32602, 00:09:55.851 "message": "Invalid MN ~m93n7ssR\u007f#2N*U@vJvMKivO^W\"PP{@9fbU<-3zVr" 00:09:55.851 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:55.851 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:55.851 [2024-06-10 14:18:33.398790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.851 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:56.111 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:56.111 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:56.111 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:56.111 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:56.111 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:56.372 [2024-06-10 14:18:33.844201] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:56.372 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:56.372 { 00:09:56.372 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:56.372 "listen_address": { 00:09:56.372 "trtype": "tcp", 00:09:56.372 "traddr": "", 00:09:56.372 "trsvcid": "4421" 00:09:56.372 }, 00:09:56.372 "method": "nvmf_subsystem_remove_listener", 00:09:56.372 "req_id": 1 00:09:56.372 } 00:09:56.372 Got JSON-RPC error response 00:09:56.372 response: 00:09:56.372 { 00:09:56.372 "code": -32602, 00:09:56.372 "message": "Invalid parameters" 00:09:56.372 }' 00:09:56.372 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:56.372 { 00:09:56.372 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:56.372 "listen_address": { 00:09:56.372 "trtype": "tcp", 00:09:56.372 "traddr": "", 00:09:56.372 "trsvcid": "4421" 00:09:56.372 }, 00:09:56.372 "method": "nvmf_subsystem_remove_listener", 00:09:56.372 "req_id": 1 00:09:56.372 } 00:09:56.372 Got JSON-RPC error response 00:09:56.372 response: 00:09:56.372 { 00:09:56.372 "code": -32602, 00:09:56.372 "message": "Invalid parameters" 00:09:56.372 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:56.372 14:18:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25959 -i 0 00:09:56.633 [2024-06-10 14:18:34.068889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25959: invalid cntlid range [0-65519] 00:09:56.633 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:56.633 { 00:09:56.633 "nqn": "nqn.2016-06.io.spdk:cnode25959", 00:09:56.633 "min_cntlid": 0, 00:09:56.633 "method": "nvmf_create_subsystem", 00:09:56.633 "req_id": 1 00:09:56.633 } 00:09:56.633 Got JSON-RPC error response 00:09:56.633 response: 00:09:56.633 { 00:09:56.633 "code": -32602, 00:09:56.633 "message": "Invalid cntlid range [0-65519]" 00:09:56.633 }' 00:09:56.633 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:56.633 { 00:09:56.633 "nqn": "nqn.2016-06.io.spdk:cnode25959", 00:09:56.633 "min_cntlid": 0, 00:09:56.633 "method": "nvmf_create_subsystem", 00:09:56.633 "req_id": 1 00:09:56.633 } 00:09:56.633 Got JSON-RPC error response 00:09:56.633 response: 00:09:56.633 { 00:09:56.633 "code": -32602, 00:09:56.633 "message": "Invalid cntlid range [0-65519]" 00:09:56.633 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:09:56.633 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25074 -i 65520 00:09:56.893 [2024-06-10 14:18:34.289627] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25074: invalid cntlid range [65520-65519] 00:09:56.893 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:56.893 { 00:09:56.893 "nqn": "nqn.2016-06.io.spdk:cnode25074", 00:09:56.893 "min_cntlid": 65520, 00:09:56.893 "method": "nvmf_create_subsystem", 00:09:56.893 "req_id": 1 00:09:56.893 } 00:09:56.893 Got JSON-RPC error response 00:09:56.893 response: 00:09:56.893 { 00:09:56.893 "code": -32602, 00:09:56.893 "message": "Invalid cntlid range [65520-65519]" 00:09:56.893 }' 00:09:56.893 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:56.893 { 00:09:56.893 "nqn": "nqn.2016-06.io.spdk:cnode25074", 00:09:56.893 "min_cntlid": 65520, 00:09:56.893 "method": "nvmf_create_subsystem", 00:09:56.893 "req_id": 1 00:09:56.893 } 00:09:56.893 Got JSON-RPC error response 00:09:56.893 response: 00:09:56.893 { 00:09:56.893 "code": -32602, 00:09:56.893 "message": "Invalid cntlid range [65520-65519]" 00:09:56.893 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:56.893 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2165 -I 0 00:09:57.154 [2024-06-10 14:18:34.510368] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2165: invalid cntlid range [1-0] 00:09:57.154 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:57.154 { 00:09:57.154 "nqn": "nqn.2016-06.io.spdk:cnode2165", 00:09:57.154 "max_cntlid": 0, 00:09:57.154 "method": "nvmf_create_subsystem", 00:09:57.154 "req_id": 1 00:09:57.154 } 00:09:57.154 Got JSON-RPC error response 00:09:57.154 response: 00:09:57.154 { 00:09:57.154 "code": -32602, 00:09:57.155 "message": "Invalid cntlid range [1-0]" 00:09:57.155 }' 00:09:57.155 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:57.155 { 00:09:57.155 "nqn": "nqn.2016-06.io.spdk:cnode2165", 00:09:57.155 "max_cntlid": 0, 00:09:57.155 "method": "nvmf_create_subsystem", 00:09:57.155 "req_id": 1 00:09:57.155 } 00:09:57.155 Got JSON-RPC error response 00:09:57.155 response: 00:09:57.155 { 00:09:57.155 "code": -32602, 00:09:57.155 "message": "Invalid cntlid range [1-0]" 00:09:57.155 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.155 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17525 -I 65520 00:09:57.155 [2024-06-10 14:18:34.682937] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17525: invalid cntlid range [1-65520] 00:09:57.155 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:57.155 { 00:09:57.155 "nqn": "nqn.2016-06.io.spdk:cnode17525", 00:09:57.155 "max_cntlid": 65520, 00:09:57.155 "method": "nvmf_create_subsystem", 00:09:57.155 "req_id": 1 00:09:57.155 } 00:09:57.155 Got JSON-RPC error response 00:09:57.155 response: 00:09:57.155 { 00:09:57.155 "code": -32602, 00:09:57.155 "message": "Invalid cntlid range [1-65520]" 00:09:57.155 }' 00:09:57.155 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 
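These steps, together with the min-greater-than-max case just below, probe nvmf_create_subsystem's controller-ID validation: SPDK accepts cntlid values only in [1, 65519], and min_cntlid must not exceed max_cntlid, so -i 0, -i 65520, -I 0, -I 65520 and -i 6 -I 5 all come back with JSON-RPC error -32602. (Just before this block the script also created the TCP transport and confirmed that removing a listener from an address that was never added fails with "Invalid parameters".) Representative invocations, with an illustrative rpc.py path and NQNs:

  RPC=./scripts/rpc.py     # the run above uses the full workspace path
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 1 -I 65519   # default range, expected to succeed
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -i 0            # rejected: min_cntlid below 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -I 65520        # rejected: max_cntlid above 65519
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -i 6 -I 5       # rejected: min_cntlid > max_cntlid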
-- # [[ request: 00:09:57.155 { 00:09:57.155 "nqn": "nqn.2016-06.io.spdk:cnode17525", 00:09:57.155 "max_cntlid": 65520, 00:09:57.155 "method": "nvmf_create_subsystem", 00:09:57.155 "req_id": 1 00:09:57.155 } 00:09:57.155 Got JSON-RPC error response 00:09:57.155 response: 00:09:57.155 { 00:09:57.155 "code": -32602, 00:09:57.155 "message": "Invalid cntlid range [1-65520]" 00:09:57.155 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.155 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10012 -i 6 -I 5 00:09:57.415 [2024-06-10 14:18:34.899647] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10012: invalid cntlid range [6-5] 00:09:57.415 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:57.415 { 00:09:57.415 "nqn": "nqn.2016-06.io.spdk:cnode10012", 00:09:57.415 "min_cntlid": 6, 00:09:57.415 "max_cntlid": 5, 00:09:57.415 "method": "nvmf_create_subsystem", 00:09:57.415 "req_id": 1 00:09:57.415 } 00:09:57.415 Got JSON-RPC error response 00:09:57.415 response: 00:09:57.415 { 00:09:57.415 "code": -32602, 00:09:57.415 "message": "Invalid cntlid range [6-5]" 00:09:57.415 }' 00:09:57.415 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:57.415 { 00:09:57.415 "nqn": "nqn.2016-06.io.spdk:cnode10012", 00:09:57.415 "min_cntlid": 6, 00:09:57.415 "max_cntlid": 5, 00:09:57.415 "method": "nvmf_create_subsystem", 00:09:57.415 "req_id": 1 00:09:57.415 } 00:09:57.415 Got JSON-RPC error response 00:09:57.415 response: 00:09:57.415 { 00:09:57.415 "code": -32602, 00:09:57.415 "message": "Invalid cntlid range [6-5]" 00:09:57.415 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:57.415 14:18:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:57.676 { 00:09:57.676 "name": "foobar", 00:09:57.676 "method": "nvmf_delete_target", 00:09:57.676 "req_id": 1 00:09:57.676 } 00:09:57.676 Got JSON-RPC error response 00:09:57.676 response: 00:09:57.676 { 00:09:57.676 "code": -32602, 00:09:57.676 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:57.676 }' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:57.676 { 00:09:57.676 "name": "foobar", 00:09:57.676 "method": "nvmf_delete_target", 00:09:57.676 "req_id": 1 00:09:57.676 } 00:09:57.676 Got JSON-RPC error response 00:09:57.676 response: 00:09:57.676 { 00:09:57.676 "code": -32602, 00:09:57.676 "message": "The specified target doesn't exist, cannot delete it." 
00:09:57.676 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.676 rmmod nvme_tcp 00:09:57.676 rmmod nvme_fabrics 00:09:57.676 rmmod nvme_keyring 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2888646 ']' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2888646 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 2888646 ']' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 2888646 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2888646 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2888646' 00:09:57.676 killing process with pid 2888646 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 2888646 00:09:57.676 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 2888646 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.939 14:18:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.886 14:18:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.886 00:09:59.886 real 0m13.943s 00:09:59.886 user 0m22.331s 00:09:59.886 sys 0m6.261s 00:09:59.886 14:18:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:59.886 14:18:37 
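After the final negative case (deleting a target named foobar that was never created), nvmftestfini unwinds the fixture: it flushes the NVMe-oF kernel modules, stops the nvmf_tgt process recorded in nvmfpid, and removes the per-test network namespace and addresses. A rough outline of that sequence; the PID is the one captured at startup, and the ip netns delete stands in for what _remove_spdk_ns does internally:

  # Rough sketch of the nvmftestfini cleanup path.
  sync
  modprobe -v -r nvme-tcp                      # also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"           # stop the nvmf_tgt reactors
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1                     # clear the initiator-side test address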
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 ************************************ 00:09:59.886 END TEST nvmf_invalid 00:09:59.886 ************************************ 00:09:59.886 14:18:37 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:59.886 14:18:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:59.886 14:18:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:59.886 14:18:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.886 ************************************ 00:09:59.886 START TEST nvmf_abort 00:09:59.886 ************************************ 00:09:59.886 14:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:00.148 * Looking for test storage... 00:10:00.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
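Each target test begins the same way: abort.sh sources test/nvmf/common.sh, which pins the listener ports (4420-4422), generates a host NQN with nvme gen-hostnqn, and records whether the run drives physical NICs (NET_TYPE=phy here) before pulling in the path helpers whose output follows. The essentials, condensed; the host-ID derivation shown is illustrative rather than the exact common.sh expression:

  # Sketch of the per-test environment nvmf/common.sh establishes.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # UUID suffix reused as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy                            # this run uses physical e810 ports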
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:00.148 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.148 14:18:37 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.149 14:18:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.286 
14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:08.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:08.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.286 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:08.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
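Device discovery in common.sh is sysfs-driven: it keeps per-vendor lists of supported device IDs (E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts), matches them against the PCI bus, and then maps each matching function, such as 0000:4b:00.0 and 0000:4b:00.1 above, to its kernel netdev through /sys/bus/pci/devices/<bdf>/net. A stripped-down version of that lookup, using a BDF from this run:

  # Resolve the netdev name(s) behind one PCI function.
  pci=0000:4b:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"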
# for net_dev in "${!pci_net_devs[@]}" 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:08.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:08.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
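nvmf_tcp_init then carves the two E810 ports into a point-to-point topology: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened in iptables, and one ping in each direction, whose replies appear just below, proves reachability. Condensed from the commands in this log (names and addresses are the ones this run uses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator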
00:10:08.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:10:08.287 00:10:08.287 --- 10.0.0.2 ping statistics --- 00:10:08.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.287 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:08.287 00:10:08.287 --- 10.0.0.1 ping statistics --- 00:10:08.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.287 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2893931 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2893931 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2893931 ']' 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 [2024-06-10 14:18:44.724093] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
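nvmfappstart then launches the SPDK target inside that namespace and blocks until its RPC socket answers. A minimal sketch of the same pattern; the polling loop below merely stands in for waitforlisten from autotest_common.sh, which does the equivalent with more retries and error handling:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # assumed equivalent of waitforlisten: poll the RPC socket until the target responds
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done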
00:10:08.287 [2024-06-10 14:18:44.724152] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.287 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.287 [2024-06-10 14:18:44.792655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.287 [2024-06-10 14:18:44.866408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.287 [2024-06-10 14:18:44.866444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.287 [2024-06-10 14:18:44.866451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.287 [2024-06-10 14:18:44.866457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.287 [2024-06-10 14:18:44.866463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.287 [2024-06-10 14:18:44.866568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.287 [2024-06-10 14:18:44.866724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.287 [2024-06-10 14:18:44.866725] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 [2024-06-10 14:18:45.004570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 Malloc0 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 Delay0 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:08.287 14:18:45 
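With the target running, abort.sh configures it over RPC: a TCP transport, a 64 MB malloc bdev wrapped in a delay bdev, and subsystem cnode0. rpc_cmd in the trace ends up invoking scripts/rpc.py (or an equivalent RPC client), so the same sequence can be written directly as:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                  # TCP transport, options copied from the trace
  $rpc bdev_malloc_create 64 4096 -b Malloc0                           # 64 MB backing bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # wrap it with large artificial latency
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # allow any host, serial number SPDK0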
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 [2024-06-10 14:18:45.080752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.287 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:08.288 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.288 14:18:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:08.288 14:18:45 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:08.288 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.288 [2024-06-10 14:18:45.252503] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:10.203 Initializing NVMe Controllers 00:10:10.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:10.203 controller IO queue size 128 less than required 00:10:10.203 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:10.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:10.203 Initialization complete. Launching workers. 
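Delay0 is then exported as namespace 1 of cnode0 with listeners on 10.0.0.2:4420, and the abort example is pointed at it; the delay bdev is presumably there so that I/O stays in flight long enough to be aborted. Condensed from the commands above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128            # 1-second run at queue depth 128 on core 0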
00:10:10.203 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34987 00:10:10.203 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35052, failed to submit 62 00:10:10.203 success 34991, unsuccess 61, failed 0 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.203 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.204 rmmod nvme_tcp 00:10:10.204 rmmod nvme_fabrics 00:10:10.204 rmmod nvme_keyring 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2893931 ']' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2893931 ']' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2893931' 00:10:10.204 killing process with pid 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2893931 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- 
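The counters above (I/O completed vs. aborts submitted and succeeded) are the pass criterion; nvmftestfini then unwinds what the test created. Roughly, and assuming _remove_spdk_ns simply deletes the test namespace (that helper's body is not shown in this trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid             # $nvmfpid is the target pid captured at startup
  ip netns delete cvl_0_0_ns_spdk            # assumption: what _remove_spdk_ns boils down to
  ip -4 addr flush cvl_0_1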
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.204 14:18:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.117 14:18:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.117 00:10:12.117 real 0m12.210s 00:10:12.117 user 0m11.809s 00:10:12.117 sys 0m6.068s 00:10:12.117 14:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:12.117 14:18:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.117 ************************************ 00:10:12.117 END TEST nvmf_abort 00:10:12.117 ************************************ 00:10:12.117 14:18:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:12.117 14:18:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:12.117 14:18:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:12.117 14:18:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.117 ************************************ 00:10:12.117 START TEST nvmf_ns_hotplug_stress 00:10:12.117 ************************************ 00:10:12.117 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:12.379 * Looking for test storage... 00:10:12.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.379 14:18:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.379 14:18:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.379 14:18:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:20.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.525 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:20.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.526 14:18:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:20.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:20.526 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:10:20.526 00:10:20.526 --- 10.0.0.2 ping statistics --- 00:10:20.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.526 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:10:20.526 00:10:20.526 --- 10.0.0.1 ping statistics --- 00:10:20.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.526 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2898611 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2898611 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2898611 ']' 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:20.526 14:18:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.526 [2024-06-10 14:18:57.016133] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:10:20.526 [2024-06-10 14:18:57.016192] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.526 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.526 [2024-06-10 14:18:57.084899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.526 [2024-06-10 14:18:57.158141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:20.526 [2024-06-10 14:18:57.158178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.526 [2024-06-10 14:18:57.158186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.526 [2024-06-10 14:18:57.158192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.526 [2024-06-10 14:18:57.158198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.526 [2024-06-10 14:18:57.158302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.526 [2024-06-10 14:18:57.158461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.526 [2024-06-10 14:18:57.158569] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.526 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:20.527 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:20.527 [2024-06-10 14:18:57.476901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.527 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.527 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.527 [2024-06-10 14:18:57.806214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.527 14:18:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:20.527 14:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:20.786 Malloc0 00:10:20.786 14:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:21.046 Delay0 00:10:21.046 14:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.304 14:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:21.304 NULL1 00:10:21.304 14:18:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:21.564 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:21.564 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2898980 00:10:21.564 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:21.564 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.564 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.824 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.824 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:21.824 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:22.084 [2024-06-10 14:18:59.481557] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:22.084 true 00:10:22.084 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:22.084 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.344 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.604 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:22.604 14:18:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:22.604 true 00:10:22.604 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:22.604 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.864 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.124 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:23.124 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:23.124 true 00:10:23.383 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2898980 00:10:23.383 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.383 14:19:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.643 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:23.643 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:23.643 true 00:10:23.904 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:23.904 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.904 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.164 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:24.164 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:24.424 true 00:10:24.424 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:24.424 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.424 14:19:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.689 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:24.689 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:24.949 true 00:10:24.949 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:24.949 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.949 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.210 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:25.210 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:25.472 true 00:10:25.472 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:25.472 14:19:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.733 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.733 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:25.733 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:25.994 true 00:10:25.994 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:25.994 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.255 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.516 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:26.516 14:19:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:26.516 true 00:10:26.516 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:26.516 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.780 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.780 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:26.780 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:27.094 true 00:10:27.094 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:27.094 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.357 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.618 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:27.618 14:19:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:27.618 true 00:10:27.618 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:27.618 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.878 14:19:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.139 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:28.139 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:28.400 true 00:10:28.400 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:28.400 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.400 14:19:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.662 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:28.662 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:28.925 true 00:10:28.925 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:28.925 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.925 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.186 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:29.186 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:29.448 true 00:10:29.448 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:29.448 14:19:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.709 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.709 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:29.709 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:29.970 true 00:10:29.970 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:29.970 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.970 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.231 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:30.231 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:30.492 true 00:10:30.492 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:30.492 14:19:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.752 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.752 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:30.752 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:31.013 true 00:10:31.013 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:31.013 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.274 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.534 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:31.534 14:19:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:31.534 true 00:10:31.534 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:31.534 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.795 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.055 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:32.055 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:32.055 true 00:10:32.055 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:32.055 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.316 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.576 14:19:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:32.576 14:19:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:32.576 true 00:10:32.576 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:32.576 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.836 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.096 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:33.096 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:33.356 true 00:10:33.356 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:33.356 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.357 14:19:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.617 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:33.617 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:33.617 true 00:10:33.878 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:33.878 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.878 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.138 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:34.138 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:34.399 true 00:10:34.399 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:34.399 14:19:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.659 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.659 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:34.659 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 
00:10:34.919 true 00:10:34.919 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:34.919 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.180 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.180 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:35.180 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:35.440 true 00:10:35.440 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:35.440 14:19:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.701 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.701 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:35.701 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:35.962 true 00:10:35.962 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:35.962 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.223 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.223 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:36.223 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:36.483 true 00:10:36.483 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:36.483 14:19:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.743 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.743 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:36.743 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:37.004 true 00:10:37.004 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:37.004 14:19:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.266 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.528 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:37.528 14:19:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:37.528 true 00:10:37.788 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:37.788 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.788 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.049 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:38.049 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:38.310 true 00:10:38.310 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:38.310 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.571 14:19:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.571 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:38.571 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:38.832 true 00:10:38.832 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:38.832 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.094 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.356 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:39.356 14:19:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:39.618 true 00:10:39.618 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:39.618 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:39.618 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.879 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:39.879 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:40.140 true 00:10:40.140 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:40.140 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.402 14:19:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.663 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:40.663 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:40.924 true 00:10:40.924 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:40.924 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.924 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.185 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:41.185 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:41.446 true 00:10:41.446 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:41.446 14:19:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.446 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.707 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:41.707 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:41.967 true 00:10:41.967 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:41.967 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.228 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.489 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:42.489 14:19:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:42.489 true 00:10:42.750 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:42.750 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.750 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.035 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:43.035 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:43.309 true 00:10:43.309 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:43.309 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.309 14:19:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.569 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:43.569 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:43.829 true 00:10:43.829 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:43.829 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.089 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.089 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:44.089 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:44.350 true 00:10:44.350 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:44.350 14:19:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.610 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.870 14:19:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:44.870 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:45.129 true 00:10:45.129 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:45.129 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.129 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.391 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:45.391 14:19:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:45.651 true 00:10:45.651 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:45.651 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.651 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.912 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:45.912 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:46.172 true 00:10:46.172 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:46.172 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.432 14:19:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.692 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:46.692 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:46.692 true 00:10:46.692 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:46.692 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.952 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.213 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:47.213 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:47.213 true 00:10:47.213 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:47.213 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.474 14:19:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.734 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:47.734 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:47.734 true 00:10:47.994 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:47.994 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.994 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.254 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:48.254 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:48.514 true 00:10:48.514 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:48.514 14:19:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.514 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.774 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:48.774 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:49.034 true 00:10:49.034 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:49.034 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.294 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.294 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:49.294 14:19:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:49.555 true 00:10:49.555 
14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:49.555 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.815 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.075 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:50.075 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:50.075 true 00:10:50.075 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:50.075 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.335 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.596 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:50.596 14:19:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:50.596 true 00:10:50.596 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:50.596 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.857 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.117 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:51.117 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:51.377 true 00:10:51.377 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:51.377 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.377 14:19:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.637 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:51.637 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:51.637 Initializing NVMe Controllers 00:10:51.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.637 Controller SPDK bdev Controller 
(SPDK00000000000001 ): Skipping inactive NS 1 00:10:51.637 Controller IO queue size 128, less than required. 00:10:51.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:51.637 WARNING: Some requested NVMe devices were skipped 00:10:51.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:51.637 Initialization complete. Launching workers. 00:10:51.637 ======================================================== 00:10:51.637 Latency(us) 00:10:51.637 Device Information : IOPS MiB/s Average min max 00:10:51.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 21920.60 10.70 5839.46 2492.13 10159.96 00:10:51.638 ======================================================== 00:10:51.638 Total : 21920.60 10.70 5839.46 2492.13 10159.96 00:10:51.638 00:10:51.898 true 00:10:51.898 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2898980 00:10:51.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2898980) - No such process 00:10:51.898 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2898980 00:10:51.898 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.159 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:52.418 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:52.418 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:52.418 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:52.418 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:52.418 14:19:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:52.418 null0 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:52.679 null1 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:52.679 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:52.939 null2 00:10:52.939 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:52.939 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:52.939 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 
100 4096 00:10:53.198 null3 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:53.198 null4 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.198 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:53.509 null5 00:10:53.509 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.509 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.509 14:19:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:53.509 null6 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:53.770 null7 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:53.770 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2905515 2905517 2905518 2905520 2905522 2905524 2905526 2905528 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.771 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.031 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:54.291 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:54.551 14:19:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.551 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.551 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.551 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.551 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.552 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.812 
14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.812 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.073 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.334 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.596 14:19:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 
14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.596 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.856 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.857 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.117 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.118 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.378 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.379 
14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.379 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.640 14:19:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.640 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.900 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.901 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.901 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.901 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.901 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.162 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.422 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.423 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.423 14:19:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.423 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.683 14:19:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.683 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.684 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.945 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.206 rmmod nvme_tcp 00:10:58.206 rmmod nvme_fabrics 00:10:58.206 rmmod nvme_keyring 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2898611 ']' 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2898611 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2898611 ']' 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2898611 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2898611 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2898611' 00:10:58.206 killing process with pid 2898611 00:10:58.206 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2898611 00:10:58.206 
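Note: the interleaved @16/@17/@18 entries above come from the namespace hot-plug loop in ns_hotplug_stress.sh; several copies of the loop appear to run concurrently against nqn.2016-06.io.spdk:cnode1, which is why the add/remove lines overlap in the trace. Reduced to a single worker, the pattern is roughly the bash sketch below, reconstructed from the rpc.py calls visible in the log rather than copied from the script. The nsid-to-bdev mapping (nsid N backed by null(N-1)) and the 10-iteration bound are taken from the trace; the fixed 1..8 ordering here is a simplification, since the traced calls are not issued sequentially.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    # hot-add namespaces 1..8, each backed by one of the null bdevs null0..null7
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    # hot-remove the same namespaces again
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done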
14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2898611 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.468 14:19:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.437 14:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.437 00:11:00.437 real 0m48.240s 00:11:00.437 user 3m22.025s 00:11:00.437 sys 0m16.957s 00:11:00.437 14:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:00.437 14:19:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 ************************************ 00:11:00.437 END TEST nvmf_ns_hotplug_stress 00:11:00.437 ************************************ 00:11:00.437 14:19:37 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.437 14:19:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:00.437 14:19:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:00.437 14:19:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.437 ************************************ 00:11:00.437 START TEST nvmf_connect_stress 00:11:00.437 ************************************ 00:11:00.437 14:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:00.699 * Looking for test storage... 
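nvmftestfini, traced just above, then tears the target environment back down. Condensed from the common.sh trace lines, the sequence amounts to the sketch below; the pid and interface name are the ones from this run, and _remove_spdk_ns has its xtrace output redirected away, so its body is not visible in the log.

sync
modprobe -v -r nvme-tcp      # unloads nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod output above
modprobe -v -r nvme-fabrics
kill 2898611                 # nvmf_tgt started for this test
wait 2898611
_remove_spdk_ns              # removes the spdk test network namespace(s); trace suppressed in this log
ip -4 addr flush cvl_0_1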
00:11:00.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.699 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.700 14:19:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.854 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.855 14:19:45 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:11:08.855 00:11:08.855 --- 10.0.0.2 ping statistics --- 00:11:08.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.855 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:11:08.855 00:11:08.855 --- 10.0.0.1 ping statistics --- 00:11:08.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.855 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2910748 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2910748 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2910748 ']' 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:08.855 14:19:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 [2024-06-10 14:19:45.417558] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
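Before the target app comes up (its startup banner begins in the line above), nvmf_tcp_init has already wired the two ice ports into a point-to-point test topology: cvl_0_0 is moved into a private network namespace for the target, cvl_0_1 stays in the root namespace as the initiator side, and the two ping checks above confirm reachability in both directions. Pulled straight out of the trace, the plumbing is:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on the default port
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> initiator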
00:11:08.855 [2024-06-10 14:19:45.417622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.855 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.855 [2024-06-10 14:19:45.489084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.855 [2024-06-10 14:19:45.564533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.856 [2024-06-10 14:19:45.564568] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.856 [2024-06-10 14:19:45.564576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.856 [2024-06-10 14:19:45.564582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.856 [2024-06-10 14:19:45.564588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.856 [2024-06-10 14:19:45.564731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.856 [2024-06-10 14:19:45.564888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.856 [2024-06-10 14:19:45.564889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 [2024-06-10 14:19:46.348971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 [2024-06-10 14:19:46.382451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.856 NULL1 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2911030 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:08.856 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.117 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.377 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.377 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:09.377 14:19:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.377 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.377 14:19:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.638 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.638 14:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:09.638 14:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.638 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.638 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.899 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:09.899 14:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:09.899 14:19:47 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.899 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:09.899 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.469 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.469 14:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:10.469 14:19:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.469 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.469 14:19:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.729 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.729 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:10.729 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.729 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.729 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.989 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.989 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:10.989 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.989 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.989 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.249 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.249 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:11.249 14:19:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.249 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.249 14:19:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.510 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:11.510 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:11.510 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.510 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:11.510 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.081 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.081 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:12.081 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.081 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.081 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.340 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.340 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:12.340 14:19:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:12.340 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.340 14:19:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.601 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.601 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:12.601 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.601 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.601 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.874 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:12.874 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:12.874 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.874 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:12.874 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.135 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.135 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:13.135 14:19:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.135 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.135 14:19:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.706 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.706 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:13.706 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.706 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.706 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.965 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:13.965 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:13.965 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.965 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:13.965 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.225 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.225 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:14.225 14:19:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.225 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.225 14:19:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.485 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.485 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:14.485 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.485 14:19:52 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.485 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.054 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.054 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:15.054 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.054 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.054 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.315 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.315 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:15.315 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.315 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.315 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.575 14:19:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.575 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:15.575 14:19:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.575 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.575 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.835 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:15.835 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:15.835 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.836 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:15.836 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.096 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.096 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:16.096 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.096 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.096 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.667 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.667 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:16.667 14:19:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.667 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.667 14:19:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.928 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.928 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:16.928 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.928 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:11:16.928 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.188 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.188 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:17.188 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.188 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.188 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.449 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.449 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:17.449 14:19:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.449 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.449 14:19:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.709 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:17.709 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:17.709 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.709 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:17.709 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.279 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.279 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:18.279 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.279 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.279 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.540 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.540 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:18.540 14:19:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.540 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.540 14:19:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.800 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.800 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:18.800 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.800 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.800 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.061 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2911030 00:11:19.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2911030) - No such process 
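The long run of kill -0 2911030 / rpc_cmd pairs above is the test's supervision loop, not a failure: connect_stress.sh keeps replaying the RPC batch it assembled in rpc.txt for as long as the backgrounded connect_stress client (started with -t 10, hence roughly ten seconds of connect/disconnect churn) is still alive, and the closing "No such process" simply marks the client's normal exit. A hedged sketch of that loop, with hypothetical variable names standing in for the script's own:

  # PERF_PID is the backgrounded connect_stress client; RPCS is the batch file built by the seq 1 20 loop
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$RPCS"      # keep the target busy with RPC traffic while connections churn (feeding the batch file is assumed)
  done
  wait "$PERF_PID"           # reap the client and pick up its exit status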
00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2911030 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.061 rmmod nvme_tcp 00:11:19.061 rmmod nvme_fabrics 00:11:19.061 rmmod nvme_keyring 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2910748 ']' 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2910748 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 2910748 ']' 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2910748 00:11:19.061 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:11:19.062 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2910748 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2910748' 00:11:19.322 killing process with pid 2910748 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2910748 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2910748 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.322 14:19:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.868 14:19:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.868 00:11:21.868 real 0m20.884s 00:11:21.868 user 0m42.479s 00:11:21.868 sys 0m8.730s 00:11:21.868 14:19:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:21.868 14:19:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.868 ************************************ 00:11:21.868 END TEST nvmf_connect_stress 00:11:21.868 ************************************ 00:11:21.868 14:19:58 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.868 14:19:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:21.868 14:19:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:21.868 14:19:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.868 ************************************ 00:11:21.868 START TEST nvmf_fused_ordering 00:11:21.868 ************************************ 00:11:21.868 14:19:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:21.868 * Looking for test storage... 00:11:21.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:21.868 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.869 14:19:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.508 
14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.508 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.508 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.508 14:20:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.508 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.508 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.508 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.509 14:20:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:11:28.509 00:11:28.509 --- 10.0.0.2 ping statistics --- 00:11:28.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.509 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:11:28.509 00:11:28.509 --- 10.0.0.1 ping statistics --- 00:11:28.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.509 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.509 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2917224 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2917224 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:28.769 14:20:06 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2917224 ']' 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:28.769 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.769 [2024-06-10 14:20:06.180267] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:11:28.769 [2024-06-10 14:20:06.180341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.769 [2024-06-10 14:20:06.251467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.769 [2024-06-10 14:20:06.324573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.769 [2024-06-10 14:20:06.324612] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.769 [2024-06-10 14:20:06.324620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.769 [2024-06-10 14:20:06.324626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.769 [2024-06-10 14:20:06.324632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
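Once this second target instance is up, the rpc_cmd sequence in the log just below configures it for the fused_ordering run; condensed, it is the sketch that follows (rpc_cmd is the autotest helper that effectively drives SPDK's scripts/rpc.py against /var/tmp/spdk.sock):

  # transport options come straight from NVMF_TRANSPORT_OPTS ('-t tcp -o') plus the test's -u 8192
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  # subsystem cnode1: any host may connect (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MB null bdev with 512-byte blocks -- the "1GB" namespace the client reports later
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1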
00:11:28.769 [2024-06-10 14:20:06.324651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 [2024-06-10 14:20:06.445919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 [2024-06-10 14:20:06.470097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 NULL1 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:29.031 14:20:06 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:29.031 14:20:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:29.031 [2024-06-10 14:20:06.533945] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:11:29.031 [2024-06-10 14:20:06.533990] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917395 ] 00:11:29.031 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.602 Attached to nqn.2016-06.io.spdk:cnode1 00:11:29.602 Namespace ID: 1 size: 1GB 00:11:29.602 fused_ordering(0) 00:11:29.602 fused_ordering(1) 00:11:29.602 fused_ordering(2) 00:11:29.602 fused_ordering(3) 00:11:29.602 fused_ordering(4) 00:11:29.602 fused_ordering(5) 00:11:29.602 fused_ordering(6) 00:11:29.602 fused_ordering(7) 00:11:29.602 fused_ordering(8) 00:11:29.602 fused_ordering(9) 00:11:29.602 fused_ordering(10) 00:11:29.602 fused_ordering(11) 00:11:29.602 fused_ordering(12) 00:11:29.602 fused_ordering(13) 00:11:29.602 fused_ordering(14) 00:11:29.602 fused_ordering(15) 00:11:29.602 fused_ordering(16) 00:11:29.602 fused_ordering(17) 00:11:29.602 fused_ordering(18) 00:11:29.602 fused_ordering(19) 00:11:29.602 fused_ordering(20) 00:11:29.602 fused_ordering(21) 00:11:29.602 fused_ordering(22) 00:11:29.602 fused_ordering(23) 00:11:29.602 fused_ordering(24) 00:11:29.602 fused_ordering(25) 00:11:29.602 fused_ordering(26) 00:11:29.602 fused_ordering(27) 00:11:29.602 fused_ordering(28) 00:11:29.602 fused_ordering(29) 00:11:29.602 fused_ordering(30) 00:11:29.602 fused_ordering(31) 00:11:29.602 fused_ordering(32) 00:11:29.602 fused_ordering(33) 00:11:29.602 fused_ordering(34) 00:11:29.602 fused_ordering(35) 00:11:29.602 fused_ordering(36) 00:11:29.602 fused_ordering(37) 00:11:29.602 fused_ordering(38) 00:11:29.602 fused_ordering(39) 00:11:29.602 fused_ordering(40) 00:11:29.602 fused_ordering(41) 00:11:29.602 fused_ordering(42) 00:11:29.602 fused_ordering(43) 00:11:29.602 fused_ordering(44) 00:11:29.602 fused_ordering(45) 00:11:29.602 fused_ordering(46) 00:11:29.602 fused_ordering(47) 00:11:29.602 fused_ordering(48) 00:11:29.602 fused_ordering(49) 00:11:29.602 fused_ordering(50) 00:11:29.602 fused_ordering(51) 00:11:29.602 fused_ordering(52) 00:11:29.602 fused_ordering(53) 00:11:29.602 fused_ordering(54) 00:11:29.602 fused_ordering(55) 00:11:29.602 fused_ordering(56) 00:11:29.602 fused_ordering(57) 00:11:29.602 fused_ordering(58) 00:11:29.602 fused_ordering(59) 00:11:29.602 fused_ordering(60) 00:11:29.602 fused_ordering(61) 00:11:29.602 fused_ordering(62) 00:11:29.602 fused_ordering(63) 00:11:29.602 fused_ordering(64) 00:11:29.602 fused_ordering(65) 00:11:29.602 fused_ordering(66) 00:11:29.602 fused_ordering(67) 00:11:29.602 fused_ordering(68) 00:11:29.602 fused_ordering(69) 00:11:29.602 fused_ordering(70) 00:11:29.602 fused_ordering(71) 00:11:29.602 fused_ordering(72) 00:11:29.602 fused_ordering(73) 00:11:29.602 fused_ordering(74) 00:11:29.602 fused_ordering(75) 00:11:29.602 fused_ordering(76) 00:11:29.602 fused_ordering(77) 00:11:29.602 fused_ordering(78) 00:11:29.602 fused_ordering(79) 
00:11:29.602 fused_ordering(80) 00:11:29.602 fused_ordering(81) 00:11:29.602 fused_ordering(82) 00:11:29.602 fused_ordering(83) 00:11:29.602 fused_ordering(84) 00:11:29.602 fused_ordering(85) 00:11:29.602 fused_ordering(86) 00:11:29.602 fused_ordering(87) 00:11:29.602 fused_ordering(88) 00:11:29.602 fused_ordering(89) 00:11:29.602 fused_ordering(90) 00:11:29.602 fused_ordering(91) 00:11:29.602 fused_ordering(92) 00:11:29.602 fused_ordering(93) 00:11:29.602 fused_ordering(94) 00:11:29.602 fused_ordering(95) 00:11:29.602 fused_ordering(96) 00:11:29.602 fused_ordering(97) 00:11:29.602 fused_ordering(98) 00:11:29.602 fused_ordering(99) 00:11:29.602 fused_ordering(100) 00:11:29.602 fused_ordering(101) 00:11:29.602 fused_ordering(102) 00:11:29.602 fused_ordering(103) 00:11:29.602 fused_ordering(104) 00:11:29.602 fused_ordering(105) 00:11:29.602 fused_ordering(106) 00:11:29.602 fused_ordering(107) 00:11:29.602 fused_ordering(108) 00:11:29.602 fused_ordering(109) 00:11:29.602 fused_ordering(110) 00:11:29.602 fused_ordering(111) 00:11:29.602 fused_ordering(112) 00:11:29.602 fused_ordering(113) 00:11:29.602 fused_ordering(114) 00:11:29.602 fused_ordering(115) 00:11:29.602 fused_ordering(116) 00:11:29.602 fused_ordering(117) 00:11:29.602 fused_ordering(118) 00:11:29.602 fused_ordering(119) 00:11:29.602 fused_ordering(120) 00:11:29.602 fused_ordering(121) 00:11:29.602 fused_ordering(122) 00:11:29.602 fused_ordering(123) 00:11:29.602 fused_ordering(124) 00:11:29.602 fused_ordering(125) 00:11:29.602 fused_ordering(126) 00:11:29.602 fused_ordering(127) 00:11:29.602 fused_ordering(128) 00:11:29.602 fused_ordering(129) 00:11:29.602 fused_ordering(130) 00:11:29.602 fused_ordering(131) 00:11:29.602 fused_ordering(132) 00:11:29.602 fused_ordering(133) 00:11:29.602 fused_ordering(134) 00:11:29.602 fused_ordering(135) 00:11:29.602 fused_ordering(136) 00:11:29.602 fused_ordering(137) 00:11:29.602 fused_ordering(138) 00:11:29.602 fused_ordering(139) 00:11:29.602 fused_ordering(140) 00:11:29.602 fused_ordering(141) 00:11:29.602 fused_ordering(142) 00:11:29.602 fused_ordering(143) 00:11:29.602 fused_ordering(144) 00:11:29.602 fused_ordering(145) 00:11:29.602 fused_ordering(146) 00:11:29.602 fused_ordering(147) 00:11:29.602 fused_ordering(148) 00:11:29.602 fused_ordering(149) 00:11:29.602 fused_ordering(150) 00:11:29.602 fused_ordering(151) 00:11:29.602 fused_ordering(152) 00:11:29.602 fused_ordering(153) 00:11:29.602 fused_ordering(154) 00:11:29.602 fused_ordering(155) 00:11:29.602 fused_ordering(156) 00:11:29.602 fused_ordering(157) 00:11:29.602 fused_ordering(158) 00:11:29.602 fused_ordering(159) 00:11:29.602 fused_ordering(160) 00:11:29.602 fused_ordering(161) 00:11:29.602 fused_ordering(162) 00:11:29.602 fused_ordering(163) 00:11:29.602 fused_ordering(164) 00:11:29.602 fused_ordering(165) 00:11:29.602 fused_ordering(166) 00:11:29.602 fused_ordering(167) 00:11:29.602 fused_ordering(168) 00:11:29.602 fused_ordering(169) 00:11:29.602 fused_ordering(170) 00:11:29.602 fused_ordering(171) 00:11:29.602 fused_ordering(172) 00:11:29.602 fused_ordering(173) 00:11:29.602 fused_ordering(174) 00:11:29.602 fused_ordering(175) 00:11:29.602 fused_ordering(176) 00:11:29.602 fused_ordering(177) 00:11:29.602 fused_ordering(178) 00:11:29.602 fused_ordering(179) 00:11:29.602 fused_ordering(180) 00:11:29.602 fused_ordering(181) 00:11:29.602 fused_ordering(182) 00:11:29.602 fused_ordering(183) 00:11:29.602 fused_ordering(184) 00:11:29.602 fused_ordering(185) 00:11:29.603 fused_ordering(186) 00:11:29.603 fused_ordering(187) 
00:11:29.603 fused_ordering(188) 00:11:29.603 fused_ordering(189) 00:11:29.603 fused_ordering(190) 00:11:29.603 fused_ordering(191) 00:11:29.603 fused_ordering(192) 00:11:29.603 fused_ordering(193) 00:11:29.603 fused_ordering(194) 00:11:29.603 fused_ordering(195) 00:11:29.603 fused_ordering(196) 00:11:29.603 fused_ordering(197) 00:11:29.603 fused_ordering(198) 00:11:29.603 fused_ordering(199) 00:11:29.603 fused_ordering(200) 00:11:29.603 fused_ordering(201) 00:11:29.603 fused_ordering(202) 00:11:29.603 fused_ordering(203) 00:11:29.603 fused_ordering(204) 00:11:29.603 fused_ordering(205) 00:11:29.863 fused_ordering(206) 00:11:29.863 fused_ordering(207) 00:11:29.863 fused_ordering(208) 00:11:29.863 fused_ordering(209) 00:11:29.863 fused_ordering(210) 00:11:29.863 fused_ordering(211) 00:11:29.863 fused_ordering(212) 00:11:29.863 fused_ordering(213) 00:11:29.863 fused_ordering(214) 00:11:29.863 fused_ordering(215) 00:11:29.863 fused_ordering(216) 00:11:29.863 fused_ordering(217) 00:11:29.863 fused_ordering(218) 00:11:29.863 fused_ordering(219) 00:11:29.863 fused_ordering(220) 00:11:29.863 fused_ordering(221) 00:11:29.863 fused_ordering(222) 00:11:29.863 fused_ordering(223) 00:11:29.863 fused_ordering(224) 00:11:29.863 fused_ordering(225) 00:11:29.863 fused_ordering(226) 00:11:29.863 fused_ordering(227) 00:11:29.863 fused_ordering(228) 00:11:29.863 fused_ordering(229) 00:11:29.863 fused_ordering(230) 00:11:29.863 fused_ordering(231) 00:11:29.863 fused_ordering(232) 00:11:29.863 fused_ordering(233) 00:11:29.863 fused_ordering(234) 00:11:29.863 fused_ordering(235) 00:11:29.863 fused_ordering(236) 00:11:29.863 fused_ordering(237) 00:11:29.863 fused_ordering(238) 00:11:29.863 fused_ordering(239) 00:11:29.863 fused_ordering(240) 00:11:29.863 fused_ordering(241) 00:11:29.863 fused_ordering(242) 00:11:29.863 fused_ordering(243) 00:11:29.863 fused_ordering(244) 00:11:29.863 fused_ordering(245) 00:11:29.863 fused_ordering(246) 00:11:29.863 fused_ordering(247) 00:11:29.863 fused_ordering(248) 00:11:29.863 fused_ordering(249) 00:11:29.863 fused_ordering(250) 00:11:29.863 fused_ordering(251) 00:11:29.863 fused_ordering(252) 00:11:29.863 fused_ordering(253) 00:11:29.863 fused_ordering(254) 00:11:29.863 fused_ordering(255) 00:11:29.863 fused_ordering(256) 00:11:29.863 fused_ordering(257) 00:11:29.863 fused_ordering(258) 00:11:29.863 fused_ordering(259) 00:11:29.863 fused_ordering(260) 00:11:29.863 fused_ordering(261) 00:11:29.863 fused_ordering(262) 00:11:29.863 fused_ordering(263) 00:11:29.863 fused_ordering(264) 00:11:29.863 fused_ordering(265) 00:11:29.863 fused_ordering(266) 00:11:29.863 fused_ordering(267) 00:11:29.863 fused_ordering(268) 00:11:29.863 fused_ordering(269) 00:11:29.863 fused_ordering(270) 00:11:29.863 fused_ordering(271) 00:11:29.863 fused_ordering(272) 00:11:29.863 fused_ordering(273) 00:11:29.863 fused_ordering(274) 00:11:29.863 fused_ordering(275) 00:11:29.863 fused_ordering(276) 00:11:29.863 fused_ordering(277) 00:11:29.863 fused_ordering(278) 00:11:29.863 fused_ordering(279) 00:11:29.863 fused_ordering(280) 00:11:29.863 fused_ordering(281) 00:11:29.863 fused_ordering(282) 00:11:29.863 fused_ordering(283) 00:11:29.863 fused_ordering(284) 00:11:29.863 fused_ordering(285) 00:11:29.863 fused_ordering(286) 00:11:29.863 fused_ordering(287) 00:11:29.863 fused_ordering(288) 00:11:29.863 fused_ordering(289) 00:11:29.863 fused_ordering(290) 00:11:29.863 fused_ordering(291) 00:11:29.863 fused_ordering(292) 00:11:29.863 fused_ordering(293) 00:11:29.863 fused_ordering(294) 00:11:29.863 
fused_ordering(295) 00:11:29.863 fused_ordering(296) 00:11:29.863 fused_ordering(297) 00:11:29.863 fused_ordering(298) 00:11:29.863 fused_ordering(299) 00:11:29.863 fused_ordering(300) 00:11:29.863 fused_ordering(301) 00:11:29.863 fused_ordering(302) 00:11:29.863 fused_ordering(303) 00:11:29.863 fused_ordering(304) 00:11:29.863 fused_ordering(305) 00:11:29.863 fused_ordering(306) 00:11:29.863 fused_ordering(307) 00:11:29.863 fused_ordering(308) 00:11:29.863 fused_ordering(309) 00:11:29.863 fused_ordering(310) 00:11:29.863 fused_ordering(311) 00:11:29.863 fused_ordering(312) 00:11:29.863 fused_ordering(313) 00:11:29.863 fused_ordering(314) 00:11:29.863 fused_ordering(315) 00:11:29.863 fused_ordering(316) 00:11:29.863 fused_ordering(317) 00:11:29.863 fused_ordering(318) 00:11:29.863 fused_ordering(319) 00:11:29.864 fused_ordering(320) 00:11:29.864 fused_ordering(321) 00:11:29.864 fused_ordering(322) 00:11:29.864 fused_ordering(323) 00:11:29.864 fused_ordering(324) 00:11:29.864 fused_ordering(325) 00:11:29.864 fused_ordering(326) 00:11:29.864 fused_ordering(327) 00:11:29.864 fused_ordering(328) 00:11:29.864 fused_ordering(329) 00:11:29.864 fused_ordering(330) 00:11:29.864 fused_ordering(331) 00:11:29.864 fused_ordering(332) 00:11:29.864 fused_ordering(333) 00:11:29.864 fused_ordering(334) 00:11:29.864 fused_ordering(335) 00:11:29.864 fused_ordering(336) 00:11:29.864 fused_ordering(337) 00:11:29.864 fused_ordering(338) 00:11:29.864 fused_ordering(339) 00:11:29.864 fused_ordering(340) 00:11:29.864 fused_ordering(341) 00:11:29.864 fused_ordering(342) 00:11:29.864 fused_ordering(343) 00:11:29.864 fused_ordering(344) 00:11:29.864 fused_ordering(345) 00:11:29.864 fused_ordering(346) 00:11:29.864 fused_ordering(347) 00:11:29.864 fused_ordering(348) 00:11:29.864 fused_ordering(349) 00:11:29.864 fused_ordering(350) 00:11:29.864 fused_ordering(351) 00:11:29.864 fused_ordering(352) 00:11:29.864 fused_ordering(353) 00:11:29.864 fused_ordering(354) 00:11:29.864 fused_ordering(355) 00:11:29.864 fused_ordering(356) 00:11:29.864 fused_ordering(357) 00:11:29.864 fused_ordering(358) 00:11:29.864 fused_ordering(359) 00:11:29.864 fused_ordering(360) 00:11:29.864 fused_ordering(361) 00:11:29.864 fused_ordering(362) 00:11:29.864 fused_ordering(363) 00:11:29.864 fused_ordering(364) 00:11:29.864 fused_ordering(365) 00:11:29.864 fused_ordering(366) 00:11:29.864 fused_ordering(367) 00:11:29.864 fused_ordering(368) 00:11:29.864 fused_ordering(369) 00:11:29.864 fused_ordering(370) 00:11:29.864 fused_ordering(371) 00:11:29.864 fused_ordering(372) 00:11:29.864 fused_ordering(373) 00:11:29.864 fused_ordering(374) 00:11:29.864 fused_ordering(375) 00:11:29.864 fused_ordering(376) 00:11:29.864 fused_ordering(377) 00:11:29.864 fused_ordering(378) 00:11:29.864 fused_ordering(379) 00:11:29.864 fused_ordering(380) 00:11:29.864 fused_ordering(381) 00:11:29.864 fused_ordering(382) 00:11:29.864 fused_ordering(383) 00:11:29.864 fused_ordering(384) 00:11:29.864 fused_ordering(385) 00:11:29.864 fused_ordering(386) 00:11:29.864 fused_ordering(387) 00:11:29.864 fused_ordering(388) 00:11:29.864 fused_ordering(389) 00:11:29.864 fused_ordering(390) 00:11:29.864 fused_ordering(391) 00:11:29.864 fused_ordering(392) 00:11:29.864 fused_ordering(393) 00:11:29.864 fused_ordering(394) 00:11:29.864 fused_ordering(395) 00:11:29.864 fused_ordering(396) 00:11:29.864 fused_ordering(397) 00:11:29.864 fused_ordering(398) 00:11:29.864 fused_ordering(399) 00:11:29.864 fused_ordering(400) 00:11:29.864 fused_ordering(401) 00:11:29.864 fused_ordering(402) 
00:11:29.864 fused_ordering(403) 00:11:29.864 fused_ordering(404) 00:11:29.864 fused_ordering(405) 00:11:29.864 fused_ordering(406) 00:11:29.864 fused_ordering(407) 00:11:29.864 fused_ordering(408) 00:11:29.864 fused_ordering(409) 00:11:29.864 fused_ordering(410) 00:11:30.434 fused_ordering(411) 00:11:30.434 fused_ordering(412) 00:11:30.434 fused_ordering(413) 00:11:30.434 fused_ordering(414) 00:11:30.434 fused_ordering(415) 00:11:30.434 fused_ordering(416) 00:11:30.434 fused_ordering(417) 00:11:30.434 fused_ordering(418) 00:11:30.434 fused_ordering(419) 00:11:30.434 fused_ordering(420) 00:11:30.434 fused_ordering(421) 00:11:30.434 fused_ordering(422) 00:11:30.434 fused_ordering(423) 00:11:30.434 fused_ordering(424) 00:11:30.434 fused_ordering(425) 00:11:30.434 fused_ordering(426) 00:11:30.434 fused_ordering(427) 00:11:30.434 fused_ordering(428) 00:11:30.434 fused_ordering(429) 00:11:30.434 fused_ordering(430) 00:11:30.434 fused_ordering(431) 00:11:30.434 fused_ordering(432) 00:11:30.434 fused_ordering(433) 00:11:30.434 fused_ordering(434) 00:11:30.434 fused_ordering(435) 00:11:30.434 fused_ordering(436) 00:11:30.434 fused_ordering(437) 00:11:30.434 fused_ordering(438) 00:11:30.434 fused_ordering(439) 00:11:30.434 fused_ordering(440) 00:11:30.434 fused_ordering(441) 00:11:30.434 fused_ordering(442) 00:11:30.434 fused_ordering(443) 00:11:30.434 fused_ordering(444) 00:11:30.434 fused_ordering(445) 00:11:30.434 fused_ordering(446) 00:11:30.434 fused_ordering(447) 00:11:30.434 fused_ordering(448) 00:11:30.434 fused_ordering(449) 00:11:30.434 fused_ordering(450) 00:11:30.434 fused_ordering(451) 00:11:30.434 fused_ordering(452) 00:11:30.434 fused_ordering(453) 00:11:30.434 fused_ordering(454) 00:11:30.434 fused_ordering(455) 00:11:30.434 fused_ordering(456) 00:11:30.434 fused_ordering(457) 00:11:30.434 fused_ordering(458) 00:11:30.434 fused_ordering(459) 00:11:30.434 fused_ordering(460) 00:11:30.434 fused_ordering(461) 00:11:30.434 fused_ordering(462) 00:11:30.434 fused_ordering(463) 00:11:30.434 fused_ordering(464) 00:11:30.434 fused_ordering(465) 00:11:30.434 fused_ordering(466) 00:11:30.434 fused_ordering(467) 00:11:30.435 fused_ordering(468) 00:11:30.435 fused_ordering(469) 00:11:30.435 fused_ordering(470) 00:11:30.435 fused_ordering(471) 00:11:30.435 fused_ordering(472) 00:11:30.435 fused_ordering(473) 00:11:30.435 fused_ordering(474) 00:11:30.435 fused_ordering(475) 00:11:30.435 fused_ordering(476) 00:11:30.435 fused_ordering(477) 00:11:30.435 fused_ordering(478) 00:11:30.435 fused_ordering(479) 00:11:30.435 fused_ordering(480) 00:11:30.435 fused_ordering(481) 00:11:30.435 fused_ordering(482) 00:11:30.435 fused_ordering(483) 00:11:30.435 fused_ordering(484) 00:11:30.435 fused_ordering(485) 00:11:30.435 fused_ordering(486) 00:11:30.435 fused_ordering(487) 00:11:30.435 fused_ordering(488) 00:11:30.435 fused_ordering(489) 00:11:30.435 fused_ordering(490) 00:11:30.435 fused_ordering(491) 00:11:30.435 fused_ordering(492) 00:11:30.435 fused_ordering(493) 00:11:30.435 fused_ordering(494) 00:11:30.435 fused_ordering(495) 00:11:30.435 fused_ordering(496) 00:11:30.435 fused_ordering(497) 00:11:30.435 fused_ordering(498) 00:11:30.435 fused_ordering(499) 00:11:30.435 fused_ordering(500) 00:11:30.435 fused_ordering(501) 00:11:30.435 fused_ordering(502) 00:11:30.435 fused_ordering(503) 00:11:30.435 fused_ordering(504) 00:11:30.435 fused_ordering(505) 00:11:30.435 fused_ordering(506) 00:11:30.435 fused_ordering(507) 00:11:30.435 fused_ordering(508) 00:11:30.435 fused_ordering(509) 00:11:30.435 
fused_ordering(510) 00:11:30.435 fused_ordering(511) 00:11:30.435 fused_ordering(512) 00:11:30.435 fused_ordering(513) 00:11:30.435 fused_ordering(514) 00:11:30.435 fused_ordering(515) 00:11:30.435 fused_ordering(516) 00:11:30.435 fused_ordering(517) 00:11:30.435 fused_ordering(518) 00:11:30.435 fused_ordering(519) 00:11:30.435 fused_ordering(520) 00:11:30.435 fused_ordering(521) 00:11:30.435 fused_ordering(522) 00:11:30.435 fused_ordering(523) 00:11:30.435 fused_ordering(524) 00:11:30.435 fused_ordering(525) 00:11:30.435 fused_ordering(526) 00:11:30.435 fused_ordering(527) 00:11:30.435 fused_ordering(528) 00:11:30.435 fused_ordering(529) 00:11:30.435 fused_ordering(530) 00:11:30.435 fused_ordering(531) 00:11:30.435 fused_ordering(532) 00:11:30.435 fused_ordering(533) 00:11:30.435 fused_ordering(534) 00:11:30.435 fused_ordering(535) 00:11:30.435 fused_ordering(536) 00:11:30.435 fused_ordering(537) 00:11:30.435 fused_ordering(538) 00:11:30.435 fused_ordering(539) 00:11:30.435 fused_ordering(540) 00:11:30.435 fused_ordering(541) 00:11:30.435 fused_ordering(542) 00:11:30.435 fused_ordering(543) 00:11:30.435 fused_ordering(544) 00:11:30.435 fused_ordering(545) 00:11:30.435 fused_ordering(546) 00:11:30.435 fused_ordering(547) 00:11:30.435 fused_ordering(548) 00:11:30.435 fused_ordering(549) 00:11:30.435 fused_ordering(550) 00:11:30.435 fused_ordering(551) 00:11:30.435 fused_ordering(552) 00:11:30.435 fused_ordering(553) 00:11:30.435 fused_ordering(554) 00:11:30.435 fused_ordering(555) 00:11:30.435 fused_ordering(556) 00:11:30.435 fused_ordering(557) 00:11:30.435 fused_ordering(558) 00:11:30.435 fused_ordering(559) 00:11:30.435 fused_ordering(560) 00:11:30.435 fused_ordering(561) 00:11:30.435 fused_ordering(562) 00:11:30.435 fused_ordering(563) 00:11:30.435 fused_ordering(564) 00:11:30.435 fused_ordering(565) 00:11:30.435 fused_ordering(566) 00:11:30.435 fused_ordering(567) 00:11:30.435 fused_ordering(568) 00:11:30.435 fused_ordering(569) 00:11:30.435 fused_ordering(570) 00:11:30.435 fused_ordering(571) 00:11:30.435 fused_ordering(572) 00:11:30.435 fused_ordering(573) 00:11:30.435 fused_ordering(574) 00:11:30.435 fused_ordering(575) 00:11:30.435 fused_ordering(576) 00:11:30.435 fused_ordering(577) 00:11:30.435 fused_ordering(578) 00:11:30.435 fused_ordering(579) 00:11:30.435 fused_ordering(580) 00:11:30.435 fused_ordering(581) 00:11:30.435 fused_ordering(582) 00:11:30.435 fused_ordering(583) 00:11:30.435 fused_ordering(584) 00:11:30.435 fused_ordering(585) 00:11:30.435 fused_ordering(586) 00:11:30.435 fused_ordering(587) 00:11:30.435 fused_ordering(588) 00:11:30.435 fused_ordering(589) 00:11:30.435 fused_ordering(590) 00:11:30.435 fused_ordering(591) 00:11:30.435 fused_ordering(592) 00:11:30.435 fused_ordering(593) 00:11:30.435 fused_ordering(594) 00:11:30.435 fused_ordering(595) 00:11:30.435 fused_ordering(596) 00:11:30.435 fused_ordering(597) 00:11:30.435 fused_ordering(598) 00:11:30.435 fused_ordering(599) 00:11:30.435 fused_ordering(600) 00:11:30.435 fused_ordering(601) 00:11:30.435 fused_ordering(602) 00:11:30.435 fused_ordering(603) 00:11:30.435 fused_ordering(604) 00:11:30.435 fused_ordering(605) 00:11:30.435 fused_ordering(606) 00:11:30.435 fused_ordering(607) 00:11:30.435 fused_ordering(608) 00:11:30.435 fused_ordering(609) 00:11:30.435 fused_ordering(610) 00:11:30.435 fused_ordering(611) 00:11:30.435 fused_ordering(612) 00:11:30.435 fused_ordering(613) 00:11:30.435 fused_ordering(614) 00:11:30.435 fused_ordering(615) 00:11:30.696 fused_ordering(616) 00:11:30.696 fused_ordering(617) 
00:11:30.696 fused_ordering(618) 00:11:30.696 fused_ordering(619) 00:11:30.696 fused_ordering(620) 00:11:30.696 fused_ordering(621) 00:11:30.696 fused_ordering(622) 00:11:30.696 fused_ordering(623) 00:11:30.696 fused_ordering(624) 00:11:30.696 fused_ordering(625) 00:11:30.696 fused_ordering(626) 00:11:30.696 fused_ordering(627) 00:11:30.696 fused_ordering(628) 00:11:30.696 fused_ordering(629) 00:11:30.696 fused_ordering(630) 00:11:30.696 fused_ordering(631) 00:11:30.696 fused_ordering(632) 00:11:30.696 fused_ordering(633) 00:11:30.696 fused_ordering(634) 00:11:30.696 fused_ordering(635) 00:11:30.696 fused_ordering(636) 00:11:30.696 fused_ordering(637) 00:11:30.696 fused_ordering(638) 00:11:30.696 fused_ordering(639) 00:11:30.696 fused_ordering(640) 00:11:30.696 fused_ordering(641) 00:11:30.697 fused_ordering(642) 00:11:30.697 fused_ordering(643) 00:11:30.697 fused_ordering(644) 00:11:30.697 fused_ordering(645) 00:11:30.697 fused_ordering(646) 00:11:30.697 fused_ordering(647) 00:11:30.697 fused_ordering(648) 00:11:30.697 fused_ordering(649) 00:11:30.697 fused_ordering(650) 00:11:30.697 fused_ordering(651) 00:11:30.697 fused_ordering(652) 00:11:30.697 fused_ordering(653) 00:11:30.697 fused_ordering(654) 00:11:30.697 fused_ordering(655) 00:11:30.697 fused_ordering(656) 00:11:30.697 fused_ordering(657) 00:11:30.697 fused_ordering(658) 00:11:30.697 fused_ordering(659) 00:11:30.697 fused_ordering(660) 00:11:30.697 fused_ordering(661) 00:11:30.697 fused_ordering(662) 00:11:30.697 fused_ordering(663) 00:11:30.697 fused_ordering(664) 00:11:30.697 fused_ordering(665) 00:11:30.697 fused_ordering(666) 00:11:30.697 fused_ordering(667) 00:11:30.697 fused_ordering(668) 00:11:30.697 fused_ordering(669) 00:11:30.697 fused_ordering(670) 00:11:30.697 fused_ordering(671) 00:11:30.697 fused_ordering(672) 00:11:30.697 fused_ordering(673) 00:11:30.697 fused_ordering(674) 00:11:30.697 fused_ordering(675) 00:11:30.697 fused_ordering(676) 00:11:30.697 fused_ordering(677) 00:11:30.697 fused_ordering(678) 00:11:30.697 fused_ordering(679) 00:11:30.697 fused_ordering(680) 00:11:30.697 fused_ordering(681) 00:11:30.697 fused_ordering(682) 00:11:30.697 fused_ordering(683) 00:11:30.697 fused_ordering(684) 00:11:30.697 fused_ordering(685) 00:11:30.697 fused_ordering(686) 00:11:30.697 fused_ordering(687) 00:11:30.697 fused_ordering(688) 00:11:30.697 fused_ordering(689) 00:11:30.697 fused_ordering(690) 00:11:30.697 fused_ordering(691) 00:11:30.697 fused_ordering(692) 00:11:30.697 fused_ordering(693) 00:11:30.697 fused_ordering(694) 00:11:30.697 fused_ordering(695) 00:11:30.697 fused_ordering(696) 00:11:30.697 fused_ordering(697) 00:11:30.697 fused_ordering(698) 00:11:30.697 fused_ordering(699) 00:11:30.697 fused_ordering(700) 00:11:30.697 fused_ordering(701) 00:11:30.697 fused_ordering(702) 00:11:30.697 fused_ordering(703) 00:11:30.697 fused_ordering(704) 00:11:30.697 fused_ordering(705) 00:11:30.697 fused_ordering(706) 00:11:30.697 fused_ordering(707) 00:11:30.697 fused_ordering(708) 00:11:30.697 fused_ordering(709) 00:11:30.697 fused_ordering(710) 00:11:30.697 fused_ordering(711) 00:11:30.697 fused_ordering(712) 00:11:30.697 fused_ordering(713) 00:11:30.697 fused_ordering(714) 00:11:30.697 fused_ordering(715) 00:11:30.697 fused_ordering(716) 00:11:30.697 fused_ordering(717) 00:11:30.697 fused_ordering(718) 00:11:30.697 fused_ordering(719) 00:11:30.697 fused_ordering(720) 00:11:30.697 fused_ordering(721) 00:11:30.697 fused_ordering(722) 00:11:30.697 fused_ordering(723) 00:11:30.697 fused_ordering(724) 00:11:30.697 
fused_ordering(725) 00:11:30.697 fused_ordering(726) 00:11:30.697 fused_ordering(727) 00:11:30.697 fused_ordering(728) 00:11:30.697 fused_ordering(729) 00:11:30.697 fused_ordering(730) 00:11:30.697 fused_ordering(731) 00:11:30.697 fused_ordering(732) 00:11:30.697 fused_ordering(733) 00:11:30.697 fused_ordering(734) 00:11:30.697 fused_ordering(735) 00:11:30.697 fused_ordering(736) 00:11:30.697 fused_ordering(737) 00:11:30.697 fused_ordering(738) 00:11:30.697 fused_ordering(739) 00:11:30.697 fused_ordering(740) 00:11:30.697 fused_ordering(741) 00:11:30.697 fused_ordering(742) 00:11:30.697 fused_ordering(743) 00:11:30.697 fused_ordering(744) 00:11:30.697 fused_ordering(745) 00:11:30.697 fused_ordering(746) 00:11:30.697 fused_ordering(747) 00:11:30.697 fused_ordering(748) 00:11:30.697 fused_ordering(749) 00:11:30.697 fused_ordering(750) 00:11:30.697 fused_ordering(751) 00:11:30.697 fused_ordering(752) 00:11:30.697 fused_ordering(753) 00:11:30.697 fused_ordering(754) 00:11:30.697 fused_ordering(755) 00:11:30.697 fused_ordering(756) 00:11:30.697 fused_ordering(757) 00:11:30.697 fused_ordering(758) 00:11:30.697 fused_ordering(759) 00:11:30.697 fused_ordering(760) 00:11:30.697 fused_ordering(761) 00:11:30.697 fused_ordering(762) 00:11:30.697 fused_ordering(763) 00:11:30.697 fused_ordering(764) 00:11:30.697 fused_ordering(765) 00:11:30.697 fused_ordering(766) 00:11:30.697 fused_ordering(767) 00:11:30.697 fused_ordering(768) 00:11:30.697 fused_ordering(769) 00:11:30.697 fused_ordering(770) 00:11:30.697 fused_ordering(771) 00:11:30.697 fused_ordering(772) 00:11:30.697 fused_ordering(773) 00:11:30.697 fused_ordering(774) 00:11:30.697 fused_ordering(775) 00:11:30.697 fused_ordering(776) 00:11:30.697 fused_ordering(777) 00:11:30.697 fused_ordering(778) 00:11:30.697 fused_ordering(779) 00:11:30.697 fused_ordering(780) 00:11:30.697 fused_ordering(781) 00:11:30.697 fused_ordering(782) 00:11:30.697 fused_ordering(783) 00:11:30.697 fused_ordering(784) 00:11:30.697 fused_ordering(785) 00:11:30.697 fused_ordering(786) 00:11:30.697 fused_ordering(787) 00:11:30.697 fused_ordering(788) 00:11:30.697 fused_ordering(789) 00:11:30.697 fused_ordering(790) 00:11:30.697 fused_ordering(791) 00:11:30.697 fused_ordering(792) 00:11:30.697 fused_ordering(793) 00:11:30.697 fused_ordering(794) 00:11:30.697 fused_ordering(795) 00:11:30.697 fused_ordering(796) 00:11:30.697 fused_ordering(797) 00:11:30.697 fused_ordering(798) 00:11:30.697 fused_ordering(799) 00:11:30.697 fused_ordering(800) 00:11:30.697 fused_ordering(801) 00:11:30.697 fused_ordering(802) 00:11:30.697 fused_ordering(803) 00:11:30.697 fused_ordering(804) 00:11:30.697 fused_ordering(805) 00:11:30.697 fused_ordering(806) 00:11:30.697 fused_ordering(807) 00:11:30.697 fused_ordering(808) 00:11:30.697 fused_ordering(809) 00:11:30.697 fused_ordering(810) 00:11:30.697 fused_ordering(811) 00:11:30.697 fused_ordering(812) 00:11:30.697 fused_ordering(813) 00:11:30.697 fused_ordering(814) 00:11:30.697 fused_ordering(815) 00:11:30.697 fused_ordering(816) 00:11:30.697 fused_ordering(817) 00:11:30.697 fused_ordering(818) 00:11:30.697 fused_ordering(819) 00:11:30.697 fused_ordering(820) 00:11:31.268 fused_ordering(821) 00:11:31.268 fused_ordering(822) 00:11:31.268 fused_ordering(823) 00:11:31.268 fused_ordering(824) 00:11:31.268 fused_ordering(825) 00:11:31.268 fused_ordering(826) 00:11:31.268 fused_ordering(827) 00:11:31.268 fused_ordering(828) 00:11:31.268 fused_ordering(829) 00:11:31.268 fused_ordering(830) 00:11:31.268 fused_ordering(831) 00:11:31.268 fused_ordering(832) 
00:11:31.268 fused_ordering(833) 00:11:31.268 fused_ordering(834) 00:11:31.268 fused_ordering(835) 00:11:31.268 fused_ordering(836) 00:11:31.268 fused_ordering(837) 00:11:31.268 fused_ordering(838) 00:11:31.268 fused_ordering(839) 00:11:31.268 fused_ordering(840) 00:11:31.268 fused_ordering(841) 00:11:31.268 fused_ordering(842) 00:11:31.268 fused_ordering(843) 00:11:31.268 fused_ordering(844) 00:11:31.268 fused_ordering(845) 00:11:31.268 fused_ordering(846) 00:11:31.268 fused_ordering(847) 00:11:31.268 fused_ordering(848) 00:11:31.268 fused_ordering(849) 00:11:31.268 fused_ordering(850) 00:11:31.268 fused_ordering(851) 00:11:31.268 fused_ordering(852) 00:11:31.268 fused_ordering(853) 00:11:31.268 fused_ordering(854) 00:11:31.268 fused_ordering(855) 00:11:31.268 fused_ordering(856) 00:11:31.268 fused_ordering(857) 00:11:31.268 fused_ordering(858) 00:11:31.268 fused_ordering(859) 00:11:31.268 fused_ordering(860) 00:11:31.268 fused_ordering(861) 00:11:31.268 fused_ordering(862) 00:11:31.268 fused_ordering(863) 00:11:31.268 fused_ordering(864) 00:11:31.268 fused_ordering(865) 00:11:31.268 fused_ordering(866) 00:11:31.268 fused_ordering(867) 00:11:31.268 fused_ordering(868) 00:11:31.268 fused_ordering(869) 00:11:31.268 fused_ordering(870) 00:11:31.268 fused_ordering(871) 00:11:31.268 fused_ordering(872) 00:11:31.268 fused_ordering(873) 00:11:31.268 fused_ordering(874) 00:11:31.268 fused_ordering(875) 00:11:31.268 fused_ordering(876) 00:11:31.268 fused_ordering(877) 00:11:31.268 fused_ordering(878) 00:11:31.268 fused_ordering(879) 00:11:31.268 fused_ordering(880) 00:11:31.268 fused_ordering(881) 00:11:31.268 fused_ordering(882) 00:11:31.268 fused_ordering(883) 00:11:31.268 fused_ordering(884) 00:11:31.268 fused_ordering(885) 00:11:31.268 fused_ordering(886) 00:11:31.268 fused_ordering(887) 00:11:31.268 fused_ordering(888) 00:11:31.268 fused_ordering(889) 00:11:31.268 fused_ordering(890) 00:11:31.268 fused_ordering(891) 00:11:31.268 fused_ordering(892) 00:11:31.268 fused_ordering(893) 00:11:31.268 fused_ordering(894) 00:11:31.268 fused_ordering(895) 00:11:31.269 fused_ordering(896) 00:11:31.269 fused_ordering(897) 00:11:31.269 fused_ordering(898) 00:11:31.269 fused_ordering(899) 00:11:31.269 fused_ordering(900) 00:11:31.269 fused_ordering(901) 00:11:31.269 fused_ordering(902) 00:11:31.269 fused_ordering(903) 00:11:31.269 fused_ordering(904) 00:11:31.269 fused_ordering(905) 00:11:31.269 fused_ordering(906) 00:11:31.269 fused_ordering(907) 00:11:31.269 fused_ordering(908) 00:11:31.269 fused_ordering(909) 00:11:31.269 fused_ordering(910) 00:11:31.269 fused_ordering(911) 00:11:31.269 fused_ordering(912) 00:11:31.269 fused_ordering(913) 00:11:31.269 fused_ordering(914) 00:11:31.269 fused_ordering(915) 00:11:31.269 fused_ordering(916) 00:11:31.269 fused_ordering(917) 00:11:31.269 fused_ordering(918) 00:11:31.269 fused_ordering(919) 00:11:31.269 fused_ordering(920) 00:11:31.269 fused_ordering(921) 00:11:31.269 fused_ordering(922) 00:11:31.269 fused_ordering(923) 00:11:31.269 fused_ordering(924) 00:11:31.269 fused_ordering(925) 00:11:31.269 fused_ordering(926) 00:11:31.269 fused_ordering(927) 00:11:31.269 fused_ordering(928) 00:11:31.269 fused_ordering(929) 00:11:31.269 fused_ordering(930) 00:11:31.269 fused_ordering(931) 00:11:31.269 fused_ordering(932) 00:11:31.269 fused_ordering(933) 00:11:31.269 fused_ordering(934) 00:11:31.269 fused_ordering(935) 00:11:31.269 fused_ordering(936) 00:11:31.269 fused_ordering(937) 00:11:31.269 fused_ordering(938) 00:11:31.269 fused_ordering(939) 00:11:31.269 
fused_ordering(940) 00:11:31.269 fused_ordering(941) 00:11:31.269 fused_ordering(942) 00:11:31.269 fused_ordering(943) 00:11:31.269 fused_ordering(944) 00:11:31.269 fused_ordering(945) 00:11:31.269 fused_ordering(946) 00:11:31.269 fused_ordering(947) 00:11:31.269 fused_ordering(948) 00:11:31.269 fused_ordering(949) 00:11:31.269 fused_ordering(950) 00:11:31.269 fused_ordering(951) 00:11:31.269 fused_ordering(952) 00:11:31.269 fused_ordering(953) 00:11:31.269 fused_ordering(954) 00:11:31.269 fused_ordering(955) 00:11:31.269 fused_ordering(956) 00:11:31.269 fused_ordering(957) 00:11:31.269 fused_ordering(958) 00:11:31.269 fused_ordering(959) 00:11:31.269 fused_ordering(960) 00:11:31.269 fused_ordering(961) 00:11:31.269 fused_ordering(962) 00:11:31.269 fused_ordering(963) 00:11:31.269 fused_ordering(964) 00:11:31.269 fused_ordering(965) 00:11:31.269 fused_ordering(966) 00:11:31.269 fused_ordering(967) 00:11:31.269 fused_ordering(968) 00:11:31.269 fused_ordering(969) 00:11:31.269 fused_ordering(970) 00:11:31.269 fused_ordering(971) 00:11:31.269 fused_ordering(972) 00:11:31.269 fused_ordering(973) 00:11:31.269 fused_ordering(974) 00:11:31.269 fused_ordering(975) 00:11:31.269 fused_ordering(976) 00:11:31.269 fused_ordering(977) 00:11:31.269 fused_ordering(978) 00:11:31.269 fused_ordering(979) 00:11:31.269 fused_ordering(980) 00:11:31.269 fused_ordering(981) 00:11:31.269 fused_ordering(982) 00:11:31.269 fused_ordering(983) 00:11:31.269 fused_ordering(984) 00:11:31.269 fused_ordering(985) 00:11:31.269 fused_ordering(986) 00:11:31.269 fused_ordering(987) 00:11:31.269 fused_ordering(988) 00:11:31.269 fused_ordering(989) 00:11:31.269 fused_ordering(990) 00:11:31.269 fused_ordering(991) 00:11:31.269 fused_ordering(992) 00:11:31.269 fused_ordering(993) 00:11:31.269 fused_ordering(994) 00:11:31.269 fused_ordering(995) 00:11:31.269 fused_ordering(996) 00:11:31.269 fused_ordering(997) 00:11:31.269 fused_ordering(998) 00:11:31.269 fused_ordering(999) 00:11:31.269 fused_ordering(1000) 00:11:31.269 fused_ordering(1001) 00:11:31.269 fused_ordering(1002) 00:11:31.269 fused_ordering(1003) 00:11:31.269 fused_ordering(1004) 00:11:31.269 fused_ordering(1005) 00:11:31.269 fused_ordering(1006) 00:11:31.269 fused_ordering(1007) 00:11:31.269 fused_ordering(1008) 00:11:31.269 fused_ordering(1009) 00:11:31.269 fused_ordering(1010) 00:11:31.269 fused_ordering(1011) 00:11:31.269 fused_ordering(1012) 00:11:31.269 fused_ordering(1013) 00:11:31.269 fused_ordering(1014) 00:11:31.269 fused_ordering(1015) 00:11:31.269 fused_ordering(1016) 00:11:31.269 fused_ordering(1017) 00:11:31.269 fused_ordering(1018) 00:11:31.269 fused_ordering(1019) 00:11:31.269 fused_ordering(1020) 00:11:31.269 fused_ordering(1021) 00:11:31.269 fused_ordering(1022) 00:11:31.269 fused_ordering(1023) 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.269 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:11:31.269 rmmod nvme_tcp 00:11:31.531 rmmod nvme_fabrics 00:11:31.531 rmmod nvme_keyring 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2917224 ']' 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2917224 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2917224 ']' 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2917224 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2917224 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2917224' 00:11:31.531 killing process with pid 2917224 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2917224 00:11:31.531 14:20:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2917224 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.531 14:20:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.076 14:20:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.076 00:11:34.076 real 0m12.190s 00:11:34.076 user 0m6.315s 00:11:34.076 sys 0m6.455s 00:11:34.076 14:20:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:34.076 14:20:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.076 ************************************ 00:11:34.076 END TEST nvmf_fused_ordering 00:11:34.076 ************************************ 00:11:34.076 14:20:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:34.076 14:20:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:34.076 14:20:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:34.076 14:20:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.076 
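For reference, the fused_ordering run that just completed configures the target through the RPC sequence traced above (the rpc_cmd helper wraps SPDK's scripts/rpc.py client). A minimal stand-alone sketch of that sequence, assuming the same NQN, listener address, and null-bdev parameters the test used:

# TCP transport, one subsystem, one namespace backed by a null bdev
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512      # reported above as the 1 GB namespace
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# the fused_ordering tool then connects as an initiator and drives the workload
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'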
************************************ 00:11:34.076 START TEST nvmf_delete_subsystem 00:11:34.076 ************************************ 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:34.076 * Looking for test storage... 00:11:34.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.076 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:34.077 14:20:11 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.077 14:20:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.663 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:40.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:40.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.664 
14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:40.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:40.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.664 14:20:18 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.664 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.925 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.925 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.925 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:11:40.926 00:11:40.926 --- 10.0.0.2 ping statistics --- 00:11:40.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.926 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:11:40.926 00:11:40.926 --- 10.0.0.1 ping statistics --- 00:11:40.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.926 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2921982 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2921982 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2921982 ']' 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.926 14:20:18 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:40.926 14:20:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.926 [2024-06-10 14:20:18.483255] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:11:40.926 [2024-06-10 14:20:18.483329] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.926 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.186 [2024-06-10 14:20:18.569575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.186 [2024-06-10 14:20:18.664295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.186 [2024-06-10 14:20:18.664358] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.186 [2024-06-10 14:20:18.664367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.186 [2024-06-10 14:20:18.664375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.186 [2024-06-10 14:20:18.664381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
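Before any NVMe-oF work, delete_subsystem.sh builds a loopback topology from the two detected ports: cvl_0_0 is moved into a private network namespace to act as the target, cvl_0_1 stays in the root namespace as the initiator, and the two pings above verify 10.0.0.1 <-> 10.0.0.2 before nvmf_tgt is launched inside the namespace. Condensed from the trace (interface names and the 0x3 core mask are specific to this CI node):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
# target runs inside the namespace on two cores (0x3)
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &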
00:11:41.186 [2024-06-10 14:20:18.664556] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.186 [2024-06-10 14:20:18.664676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.756 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:41.756 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:11:41.756 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.756 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:41.756 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 [2024-06-10 14:20:19.390714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 [2024-06-10 14:20:19.414858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 NULL1 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 Delay0 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2922083 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:42.017 14:20:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:42.017 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.017 [2024-06-10 14:20:19.511492] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:43.930 14:20:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.930 14:20:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:43.930 14:20:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Write completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 
Write completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.191 starting I/O failed: -6 00:11:44.191 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 
00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 [2024-06-10 14:20:21.675394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe040 is same with the state(5) to be set 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error 
(sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 Write completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.192 starting I/O failed: -6 00:11:44.192 Read completed with error (sct=0, sc=8) 00:11:44.193 Write completed with error (sct=0, sc=8) 00:11:44.193 starting I/O failed: -6 00:11:44.193 Write completed with error (sct=0, sc=8) 00:11:44.193 Read completed with error (sct=0, sc=8) 00:11:44.193 starting I/O failed: -6 00:11:44.193 Write completed with error (sct=0, sc=8) 00:11:44.193 starting I/O failed: -6 00:11:44.193 starting I/O failed: -6 00:11:44.193 
starting I/O failed: -6 00:11:44.193 starting I/O failed: -6 00:11:44.193 starting I/O failed: -6 00:11:44.193 starting I/O failed: -6 00:11:45.131 [2024-06-10 14:20:22.652420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9d550 is same with the state(5) to be set 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 [2024-06-10 14:20:22.678557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbde60 is same with the state(5) to be set 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 [2024-06-10 14:20:22.679055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe220 is same with the state(5) to be set 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error 
(sct=0, sc=8) 00:11:45.131 Write completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.131 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 [2024-06-10 14:20:22.682111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe5f400cfe0 is same with the state(5) to be set 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 
00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 Write completed with error (sct=0, sc=8) 00:11:45.132 Read completed with error (sct=0, sc=8) 00:11:45.132 [2024-06-10 14:20:22.682301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe5f400d740 is same with the state(5) to be set 00:11:45.132 Initializing NVMe Controllers 00:11:45.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:45.132 Controller IO queue size 128, less than required. 00:11:45.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:45.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:45.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:45.132 Initialization complete. Launching workers. 00:11:45.132 ======================================================== 00:11:45.132 Latency(us) 00:11:45.132 Device Information : IOPS MiB/s Average min max 00:11:45.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.93 0.08 901330.03 287.24 1005257.32 00:11:45.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.88 0.09 911036.99 409.93 1009504.18 00:11:45.132 ======================================================== 00:11:45.132 Total : 349.81 0.17 906404.75 287.24 1009504.18 00:11:45.132 00:11:45.132 [2024-06-10 14:20:22.682964] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9d550 (9): Bad file descriptor 00:11:45.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:45.132 14:20:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.132 14:20:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:45.132 14:20:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2922083 00:11:45.132 14:20:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2922083 00:11:45.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2922083) - No such process 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2922083 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2922083 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2922083 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:11:45.703 
14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.703 [2024-06-10 14:20:23.212114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2922856 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:45.703 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.703 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.703 [2024-06-10 14:20:23.282730] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
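Both passes of delete_subsystem.sh follow the same shape as the trace above: configure cnode1 backed by the Delay0 bdev, start spdk_nvme_perf against it in the background, delete the subsystem while I/O is in flight (first pass) or leave it alone (second pass), then poll the perf process until it exits. Condensed to the commands echoed in the trace, with the harness' error handling left out:

# Sketch of the delete-under-load pattern; rpc_cmd is the harness wrapper around rpc.py.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-run
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do                  # wait for perf to notice and exit
  (( delay++ > 30 )) && break                              # the real script fails the test here
  sleep 0.5
done
! wait "$perf_pid"   # perf is expected to exit with an error once the subsystem is gone

The flood of "Read/Write completed with error (sct=0, sc=8)" lines and the nonzero perf exit earlier in the log are the expected outcome of that first pass; the second pass, launched just above with -t 3 and no deletion, is expected to finish cleanly.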
00:11:46.274 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.274 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:46.274 14:20:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.844 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.844 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:46.844 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.414 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.415 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:47.415 14:20:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.676 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.676 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:47.676 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.247 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.247 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:48.247 14:20:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.818 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.818 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:48.818 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.095 Initializing NVMe Controllers 00:11:49.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.095 Controller IO queue size 128, less than required. 00:11:49.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:49.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:49.095 Initialization complete. Launching workers. 
00:11:49.095 ======================================================== 00:11:49.095 Latency(us) 00:11:49.095 Device Information : IOPS MiB/s Average min max 00:11:49.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003397.11 1000202.94 1042929.60 00:11:49.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004761.97 1000686.37 1041774.15 00:11:49.095 ======================================================== 00:11:49.095 Total : 256.00 0.12 1004079.54 1000202.94 1042929.60 00:11:49.095 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2922856 00:11:49.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2922856) - No such process 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2922856 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.408 rmmod nvme_tcp 00:11:49.408 rmmod nvme_fabrics 00:11:49.408 rmmod nvme_keyring 00:11:49.408 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2921982 ']' 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2921982 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2921982 ']' 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2921982 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2921982 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2921982' 00:11:49.409 killing process with pid 2921982 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2921982 00:11:49.409 14:20:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 
2921982 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.669 14:20:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.580 14:20:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.580 00:11:51.580 real 0m17.839s 00:11:51.580 user 0m30.929s 00:11:51.580 sys 0m6.284s 00:11:51.580 14:20:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:51.580 14:20:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.580 ************************************ 00:11:51.580 END TEST nvmf_delete_subsystem 00:11:51.580 ************************************ 00:11:51.580 14:20:29 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:51.580 14:20:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:51.580 14:20:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:51.580 14:20:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.580 ************************************ 00:11:51.580 START TEST nvmf_ns_masking 00:11:51.580 ************************************ 00:11:51.580 14:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:51.840 * Looking for test storage... 
00:11:51.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.840 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=dc3a83ae-4788-483b-92a9-9359f1d706a6 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.841 14:20:29 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.841 14:20:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:58.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:58.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:58.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:58.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.429 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.690 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:11:58.951 00:11:58.951 --- 10.0.0.2 ping statistics --- 00:11:58.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.951 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:58.951 00:11:58.951 --- 10.0.0.1 ping statistics --- 00:11:58.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.951 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2927754 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2927754 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2927754 ']' 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:58.951 14:20:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:58.951 [2024-06-10 14:20:36.397551] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
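For these phy (e810/ice) runs, nvmftestinit rewires the two ports the same way each time: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened, and both directions are ping-checked. Condensed from the nvmf_tcp_init trace above (a sketch of the traced commands, not the script itself):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator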
00:11:58.951 [2024-06-10 14:20:36.397616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.951 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.951 [2024-06-10 14:20:36.484296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.212 [2024-06-10 14:20:36.581303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.212 [2024-06-10 14:20:36.581365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.212 [2024-06-10 14:20:36.581373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.212 [2024-06-10 14:20:36.581380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.212 [2024-06-10 14:20:36.581386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.212 [2024-06-10 14:20:36.581520] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.212 [2024-06-10 14:20:36.581664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.212 [2024-06-10 14:20:36.581827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.212 [2024-06-10 14:20:36.581828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.783 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.045 [2024-06-10 14:20:37.513704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.045 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:00.045 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:00.045 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:00.305 Malloc1 00:12:00.305 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.566 Malloc2 00:12:00.566 14:20:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.826 14:20:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.087 14:20:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.087 [2024-06-10 14:20:38.602771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.087 14:20:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:01.087 14:20:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc3a83ae-4788-483b-92a9-9359f1d706a6 -a 10.0.0.2 -s 4420 -i 4 00:12:01.348 14:20:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.348 14:20:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:01.348 14:20:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.348 14:20:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:01.348 14:20:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:03.264 [ 0]:0x1 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.264 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.525 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2aed4a12f1134d8fa40495011ef98828 00:12:03.526 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2aed4a12f1134d8fa40495011ef98828 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.526 14:20:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:03.526 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:03.526 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.526 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:12:03.526 [ 0]:0x1 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2aed4a12f1134d8fa40495011ef98828 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2aed4a12f1134d8fa40495011ef98828 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:03.787 [ 1]:0x2 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.787 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.048 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc3a83ae-4788-483b-92a9-9359f1d706a6 -a 10.0.0.2 -s 4420 -i 4 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:12:04.308 14:20:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.848 14:20:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:06.848 [ 0]:0x2 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.848 [ 0]:0x1 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2aed4a12f1134d8fa40495011ef98828 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2aed4a12f1134d8fa40495011ef98828 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.848 [ 1]:0x2 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.848 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:07.108 
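The [ 0]:0x1 / [ 1]:0x2 lines above are the output of the test's visibility probe: it lists the controller's active namespaces and then reads the NGUID of a given NSID, treating an all-zero NGUID as "namespace not visible to this host". A minimal stand-alone version of that check, assuming the connected controller shows up as /dev/nvme0 as it does in this run, would look roughly like:

  ns_is_visible() {
      local nsid=$1
      # the namespace must appear in the controller's active namespace list ...
      nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
      # ... and its NGUID must be something other than all zeroes
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1   # succeeds only while namespace 1 is exposed to this host NQN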
14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:07.108 [ 0]:0x2 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.108 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:07.367 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:07.367 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.367 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:07.367 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.367 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.625 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:07.625 14:20:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dc3a83ae-4788-483b-92a9-9359f1d706a6 -a 10.0.0.2 -s 4420 -i 4 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:07.625 14:20:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:09.534 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:09.796 [ 0]:0x1 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2aed4a12f1134d8fa40495011ef98828 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2aed4a12f1134d8fa40495011ef98828 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:09.796 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.057 [ 1]:0x2 00:12:10.057 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.057 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.057 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:10.057 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.057 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.318 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.319 [ 0]:0x2 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:10.319 14:20:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:10.580 [2024-06-10 14:20:48.004886] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:10.580 request: 00:12:10.580 { 00:12:10.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.580 "nsid": 2, 00:12:10.580 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.580 "method": 
"nvmf_ns_remove_host", 00:12:10.580 "req_id": 1 00:12:10.580 } 00:12:10.580 Got JSON-RPC error response 00:12:10.580 response: 00:12:10.580 { 00:12:10.580 "code": -32602, 00:12:10.580 "message": "Invalid parameters" 00:12:10.580 } 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.580 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.581 [ 0]:0x2 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=48e73812915842549140b7d145083d8d 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 48e73812915842549140b7d145083d8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:10.581 14:20:48 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.842 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.842 rmmod nvme_tcp 00:12:10.842 rmmod nvme_fabrics 00:12:10.842 rmmod nvme_keyring 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2927754 ']' 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2927754 00:12:11.102 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2927754 ']' 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2927754 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2927754 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2927754' 00:12:11.103 killing process with pid 2927754 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2927754 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 2927754 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.103 14:20:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.711 
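With the controller disconnected and the subsystem deleted, the whole masking scenario reduces to a short RPC sequence against the running target (rpc.py below stands for the full scripts/rpc.py path used throughout this log; the NQNs, bdev names and listener address are the ones exercised above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # attach the namespace without auto-visibility, so no host can see it by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # expose, then hide, namespace 1 for one specific host NQN
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1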
14:20:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.711 00:12:13.711 real 0m21.571s 00:12:13.711 user 0m53.474s 00:12:13.711 sys 0m6.822s 00:12:13.711 14:20:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:13.711 14:20:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:13.711 ************************************ 00:12:13.711 END TEST nvmf_ns_masking 00:12:13.711 ************************************ 00:12:13.712 14:20:50 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:13.712 14:20:50 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.712 14:20:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:13.712 14:20:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:13.712 14:20:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.712 ************************************ 00:12:13.712 START TEST nvmf_nvme_cli 00:12:13.712 ************************************ 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:13.712 * Looking for test storage... 00:12:13.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:13.712 14:20:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:20.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:20.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.303 14:20:57 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:20.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:20.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.303 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.304 14:20:57 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:12:20.564 00:12:20.564 --- 10.0.0.2 ping statistics --- 00:12:20.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.564 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:12:20.564 00:12:20.564 --- 10.0.0.1 ping statistics --- 00:12:20.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.564 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.564 14:20:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2934565 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2934565 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2934565 ']' 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
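The nvmfappstart step here launches nvmf_tgt inside the target namespace and blocks until its JSON-RPC socket answers. Stripped of the harness, the pattern is approximately the following sketch (paths are relative to the spdk checkout; polling rpc_get_methods is just one simple way to wait for the default /var/tmp/spdk.sock socket, which is what the harness's waitforlisten helper is doing here):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll a harmless RPC until the app is up and listening on its RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done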
00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:20.564 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.564 [2024-06-10 14:20:58.084866] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:12:20.564 [2024-06-10 14:20:58.084914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.564 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.825 [2024-06-10 14:20:58.168779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.825 [2024-06-10 14:20:58.250526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.825 [2024-06-10 14:20:58.250585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.825 [2024-06-10 14:20:58.250592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.826 [2024-06-10 14:20:58.250599] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.826 [2024-06-10 14:20:58.250605] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.826 [2024-06-10 14:20:58.250736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.826 [2024-06-10 14:20:58.250881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.826 [2024-06-10 14:20:58.251049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.826 [2024-06-10 14:20:58.251050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.398 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:21.398 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:12:21.398 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.398 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:21.398 14:20:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 14:20:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 [2024-06-10 14:20:59.007135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 Malloc0 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 Malloc1 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 [2024-06-10 14:20:59.094501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.660 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:21.661 00:12:21.661 Discovery Log Number of Records 2, Generation counter 2 00:12:21.661 =====Discovery Log Entry 0====== 00:12:21.661 trtype: tcp 00:12:21.661 adrfam: ipv4 00:12:21.661 subtype: current discovery subsystem 00:12:21.661 treq: not required 00:12:21.661 portid: 0 00:12:21.661 trsvcid: 4420 00:12:21.661 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.661 traddr: 10.0.0.2 00:12:21.661 eflags: explicit discovery connections, duplicate discovery information 00:12:21.661 sectype: none 00:12:21.661 =====Discovery Log Entry 1====== 00:12:21.661 trtype: tcp 00:12:21.661 adrfam: ipv4 00:12:21.661 subtype: nvme subsystem 00:12:21.661 treq: not required 00:12:21.661 portid: 0 00:12:21.661 trsvcid: 4420 
00:12:21.661 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:21.661 traddr: 10.0.0.2 00:12:21.661 eflags: none 00:12:21.661 sectype: none 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:21.661 14:20:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:23.575 14:21:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.491 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.492 14:21:02 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:25.492 /dev/nvme0n1 ]] 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.492 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:25.754 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.015 14:21:03 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.015 rmmod nvme_tcp 00:12:26.015 rmmod nvme_fabrics 00:12:26.015 rmmod nvme_keyring 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2934565 ']' 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2934565 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 2934565 ']' 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2934565 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2934565 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2934565' 00:12:26.015 killing process with pid 2934565 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2934565 00:12:26.015 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2934565 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.276 14:21:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.190 14:21:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.190 00:12:28.190 real 0m14.898s 00:12:28.190 user 0m23.670s 00:12:28.190 sys 0m5.812s 00:12:28.190 14:21:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.190 14:21:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:28.190 ************************************ 00:12:28.190 END TEST nvmf_nvme_cli 00:12:28.190 ************************************ 00:12:28.190 14:21:05 nvmf_tcp 
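The nvmf_nvme_cli run that finishes above boils down to a short target-side RPC sequence plus a handful of initiator-side nvme-cli calls, all of which appear verbatim in the trace. The following is a minimal standalone sketch of that flow for readability, not the test script itself: it assumes an already running nvmf_tgt, reuses the rpc.py path from this workspace, drops the --hostnqn/--hostid flags that the test generates per host, and the shell variable name rpc is mine.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Target side: TCP transport, two 64 MiB / 512 B malloc bdevs, one subsystem holding both namespaces
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: discover, connect, wait for both namespaces to surface, then tear down
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  for i in $(seq 1 15); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )) && break
    sleep 2
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1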
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:28.190 14:21:05 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.190 14:21:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:28.190 14:21:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:28.190 14:21:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.453 ************************************ 00:12:28.453 START TEST nvmf_vfio_user 00:12:28.453 ************************************ 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.453 * Looking for test storage... 00:12:28.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:28.453 
14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2936256 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2936256' 00:12:28.453 Process pid: 2936256 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2936256 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2936256 ']' 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:28.453 14:21:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:28.453 [2024-06-10 14:21:05.994594] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:12:28.453 [2024-06-10 14:21:05.994660] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.453 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.715 [2024-06-10 14:21:06.076661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.715 [2024-06-10 14:21:06.146121] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.715 [2024-06-10 14:21:06.146156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.715 [2024-06-10 14:21:06.146163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.715 [2024-06-10 14:21:06.146173] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.715 [2024-06-10 14:21:06.146178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
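Before any of the vfio-user configuration can happen, the test launches its own nvmf_tgt with a four-core mask and blocks until the application is listening on /var/tmp/spdk.sock, which is what the waitforlisten helper in the trace is doing. A minimal sketch of that launch-and-wait pattern is below; the spdk_get_version readiness probe and the 0.5 s poll interval are substitutions of mine, since the real helper watches the pid and the socket directly.

  # Start the target on cores 0-3 with all tracepoint groups enabled, as in the run above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # Poll the default RPC socket until the target answers a cheap query
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done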
00:12:28.715 [2024-06-10 14:21:06.146217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.715 [2024-06-10 14:21:06.146344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.715 [2024-06-10 14:21:06.146516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.715 [2024-06-10 14:21:06.146517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.286 14:21:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:29.286 14:21:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:29.286 14:21:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:30.672 14:21:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:30.672 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:30.672 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:30.672 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.672 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:30.672 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.933 Malloc1 00:12:30.933 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:31.193 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:31.193 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:31.454 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.454 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:31.454 14:21:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:31.715 Malloc2 00:12:31.715 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:31.976 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.236 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.498 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:32.498 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:32.498 14:21:09 
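With the target up, the trace above builds the vfio-user side of the test: a VFIOUSER transport, then for each of the two devices a malloc bdev, a subsystem, and a listener whose address is a filesystem directory rather than an IP and port. The sketch below reproduces that sequence for device 1 using the commands visible in the trace (device 2 is identical with the vfio-user2/2 path, Malloc2 and cnode2); the trailing spdk_nvme_identify call is the one whose controller dump follows below.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # The listener address is the vfio-user socket directory, served with -s 0
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # Exercise the controller through that directory, with nvme/vfio debug logging enabled
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci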
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:32.498 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.498 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.499 14:21:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:32.499 [2024-06-10 14:21:09.898166] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:12:32.499 [2024-06-10 14:21:09.898208] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937421 ] 00:12:32.499 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.499 [2024-06-10 14:21:09.928938] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:32.499 [2024-06-10 14:21:09.934275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.499 [2024-06-10 14:21:09.934295] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9adf9d5000 00:12:32.499 [2024-06-10 14:21:09.935281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.936277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.937275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.938285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.939287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.940294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.941296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.942298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.499 [2024-06-10 14:21:09.943311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.499 [2024-06-10 14:21:09.943326] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9adf9ca000 00:12:32.499 [2024-06-10 14:21:09.944653] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.499 [2024-06-10 14:21:09.966469] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:32.499 [2024-06-10 14:21:09.966492] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:32.499 [2024-06-10 14:21:09.969485] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.499 [2024-06-10 14:21:09.969530] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:32.499 [2024-06-10 14:21:09.969613] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:32.499 [2024-06-10 14:21:09.969630] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:32.499 [2024-06-10 14:21:09.969636] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:32.499 [2024-06-10 14:21:09.970487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:32.499 [2024-06-10 14:21:09.970498] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:32.499 [2024-06-10 14:21:09.970508] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:32.499 [2024-06-10 14:21:09.971494] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.499 [2024-06-10 14:21:09.971506] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:32.499 [2024-06-10 14:21:09.971513] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.972499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:32.499 [2024-06-10 14:21:09.972507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.973507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:32.499 [2024-06-10 14:21:09.973515] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:32.499 [2024-06-10 14:21:09.973520] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.973526] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.973632] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:32.499 [2024-06-10 14:21:09.973637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.973641] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:32.499 [2024-06-10 14:21:09.974516] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:32.499 [2024-06-10 14:21:09.975517] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:32.499 [2024-06-10 14:21:09.976527] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.499 [2024-06-10 14:21:09.977530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.499 [2024-06-10 14:21:09.977607] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:32.499 [2024-06-10 14:21:09.978547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:32.499 [2024-06-10 14:21:09.978554] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:32.499 [2024-06-10 14:21:09.978559] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978580] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:32.499 [2024-06-10 14:21:09.978587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978603] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.499 [2024-06-10 14:21:09.978608] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.499 [2024-06-10 14:21:09.978625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.499 [2024-06-10 14:21:09.978661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:32.499 [2024-06-10 14:21:09.978671] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:32.499 [2024-06-10 14:21:09.978676] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:32.499 [2024-06-10 14:21:09.978682] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:32.499 [2024-06-10 14:21:09.978687] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:32.499 [2024-06-10 14:21:09.978691] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:12:32.499 [2024-06-10 14:21:09.978696] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:32.499 [2024-06-10 14:21:09.978700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978708] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:32.499 [2024-06-10 14:21:09.978733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:32.499 [2024-06-10 14:21:09.978743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.499 [2024-06-10 14:21:09.978751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.499 [2024-06-10 14:21:09.978759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.499 [2024-06-10 14:21:09.978767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.499 [2024-06-10 14:21:09.978772] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:32.499 [2024-06-10 14:21:09.978800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:32.499 [2024-06-10 14:21:09.978806] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:32.499 [2024-06-10 14:21:09.978811] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978817] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978824] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:32.499 [2024-06-10 14:21:09.978833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.978844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.978892] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.978900] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.978908] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:32.500 [2024-06-10 14:21:09.978912] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:32.500 [2024-06-10 14:21:09.978918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.978931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.978944] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:32.500 [2024-06-10 14:21:09.978951] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.978959] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.978965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.500 [2024-06-10 14:21:09.978969] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.500 [2024-06-10 14:21:09.978976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.978992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979011] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979018] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.500 [2024-06-10 14:21:09.979022] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.500 [2024-06-10 14:21:09.979028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979049] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979062] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979073] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979079] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:32.500 [2024-06-10 14:21:09.979084] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:32.500 [2024-06-10 14:21:09.979089] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:32.500 [2024-06-10 14:21:09.979108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979195] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:32.500 [2024-06-10 14:21:09.979200] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:32.500 [2024-06-10 14:21:09.979203] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:32.500 [2024-06-10 14:21:09.979207] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:32.500 [2024-06-10 14:21:09.979213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:32.500 [2024-06-10 14:21:09.979220] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:32.500 [2024-06-10 14:21:09.979224] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:32.500 [2024-06-10 14:21:09.979230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979237] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:32.500 [2024-06-10 14:21:09.979241] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.500 [2024-06-10 14:21:09.979247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:32.500 [2024-06-10 14:21:09.979259] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:32.500 [2024-06-10 14:21:09.979265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:32.500 [2024-06-10 14:21:09.979272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:32.500 [2024-06-10 14:21:09.979304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:32.500 ===================================================== 00:12:32.500 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.500 ===================================================== 00:12:32.500 Controller Capabilities/Features 00:12:32.500 ================================ 00:12:32.500 Vendor ID: 4e58 00:12:32.500 Subsystem Vendor ID: 4e58 00:12:32.500 Serial Number: SPDK1 00:12:32.500 Model Number: SPDK bdev Controller 00:12:32.500 Firmware Version: 24.09 00:12:32.500 Recommended Arb Burst: 6 00:12:32.500 IEEE OUI Identifier: 8d 6b 50 00:12:32.500 Multi-path I/O 00:12:32.500 May have multiple subsystem ports: Yes 00:12:32.500 May have multiple controllers: Yes 00:12:32.500 Associated with SR-IOV VF: No 00:12:32.500 Max Data Transfer Size: 131072 00:12:32.500 Max Number of Namespaces: 32 00:12:32.500 Max Number of I/O Queues: 127 00:12:32.500 NVMe Specification Version (VS): 1.3 00:12:32.500 NVMe Specification Version (Identify): 1.3 00:12:32.500 Maximum Queue Entries: 256 00:12:32.500 Contiguous Queues Required: Yes 00:12:32.500 Arbitration Mechanisms Supported 00:12:32.500 Weighted Round Robin: Not Supported 00:12:32.500 Vendor Specific: Not Supported 00:12:32.500 Reset Timeout: 15000 ms 00:12:32.500 Doorbell Stride: 4 bytes 00:12:32.500 NVM Subsystem Reset: Not Supported 00:12:32.500 Command Sets Supported 00:12:32.500 NVM Command Set: Supported 00:12:32.500 Boot Partition: Not Supported 00:12:32.500 Memory Page Size Minimum: 4096 bytes 00:12:32.500 Memory Page Size Maximum: 4096 bytes 00:12:32.500 Persistent Memory Region: Not Supported 00:12:32.500 Optional Asynchronous Events Supported 00:12:32.500 Namespace Attribute Notices: Supported 00:12:32.500 Firmware Activation Notices: Not Supported 00:12:32.500 ANA Change Notices: Not Supported 00:12:32.500 PLE Aggregate Log Change Notices: 
Not Supported 00:12:32.500 LBA Status Info Alert Notices: Not Supported 00:12:32.500 EGE Aggregate Log Change Notices: Not Supported 00:12:32.500 Normal NVM Subsystem Shutdown event: Not Supported 00:12:32.500 Zone Descriptor Change Notices: Not Supported 00:12:32.500 Discovery Log Change Notices: Not Supported 00:12:32.500 Controller Attributes 00:12:32.500 128-bit Host Identifier: Supported 00:12:32.500 Non-Operational Permissive Mode: Not Supported 00:12:32.500 NVM Sets: Not Supported 00:12:32.500 Read Recovery Levels: Not Supported 00:12:32.500 Endurance Groups: Not Supported 00:12:32.500 Predictable Latency Mode: Not Supported 00:12:32.500 Traffic Based Keep ALive: Not Supported 00:12:32.500 Namespace Granularity: Not Supported 00:12:32.500 SQ Associations: Not Supported 00:12:32.500 UUID List: Not Supported 00:12:32.500 Multi-Domain Subsystem: Not Supported 00:12:32.500 Fixed Capacity Management: Not Supported 00:12:32.500 Variable Capacity Management: Not Supported 00:12:32.500 Delete Endurance Group: Not Supported 00:12:32.500 Delete NVM Set: Not Supported 00:12:32.500 Extended LBA Formats Supported: Not Supported 00:12:32.500 Flexible Data Placement Supported: Not Supported 00:12:32.500 00:12:32.501 Controller Memory Buffer Support 00:12:32.501 ================================ 00:12:32.501 Supported: No 00:12:32.501 00:12:32.501 Persistent Memory Region Support 00:12:32.501 ================================ 00:12:32.501 Supported: No 00:12:32.501 00:12:32.501 Admin Command Set Attributes 00:12:32.501 ============================ 00:12:32.501 Security Send/Receive: Not Supported 00:12:32.501 Format NVM: Not Supported 00:12:32.501 Firmware Activate/Download: Not Supported 00:12:32.501 Namespace Management: Not Supported 00:12:32.501 Device Self-Test: Not Supported 00:12:32.501 Directives: Not Supported 00:12:32.501 NVMe-MI: Not Supported 00:12:32.501 Virtualization Management: Not Supported 00:12:32.501 Doorbell Buffer Config: Not Supported 00:12:32.501 Get LBA Status Capability: Not Supported 00:12:32.501 Command & Feature Lockdown Capability: Not Supported 00:12:32.501 Abort Command Limit: 4 00:12:32.501 Async Event Request Limit: 4 00:12:32.501 Number of Firmware Slots: N/A 00:12:32.501 Firmware Slot 1 Read-Only: N/A 00:12:32.501 Firmware Activation Without Reset: N/A 00:12:32.501 Multiple Update Detection Support: N/A 00:12:32.501 Firmware Update Granularity: No Information Provided 00:12:32.501 Per-Namespace SMART Log: No 00:12:32.501 Asymmetric Namespace Access Log Page: Not Supported 00:12:32.501 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:32.501 Command Effects Log Page: Supported 00:12:32.501 Get Log Page Extended Data: Supported 00:12:32.501 Telemetry Log Pages: Not Supported 00:12:32.501 Persistent Event Log Pages: Not Supported 00:12:32.501 Supported Log Pages Log Page: May Support 00:12:32.501 Commands Supported & Effects Log Page: Not Supported 00:12:32.501 Feature Identifiers & Effects Log Page:May Support 00:12:32.501 NVMe-MI Commands & Effects Log Page: May Support 00:12:32.501 Data Area 4 for Telemetry Log: Not Supported 00:12:32.501 Error Log Page Entries Supported: 128 00:12:32.501 Keep Alive: Supported 00:12:32.501 Keep Alive Granularity: 10000 ms 00:12:32.501 00:12:32.501 NVM Command Set Attributes 00:12:32.501 ========================== 00:12:32.501 Submission Queue Entry Size 00:12:32.501 Max: 64 00:12:32.501 Min: 64 00:12:32.501 Completion Queue Entry Size 00:12:32.501 Max: 16 00:12:32.501 Min: 16 00:12:32.501 Number of Namespaces: 32 00:12:32.501 Compare 
Command: Supported 00:12:32.501 Write Uncorrectable Command: Not Supported 00:12:32.501 Dataset Management Command: Supported 00:12:32.501 Write Zeroes Command: Supported 00:12:32.501 Set Features Save Field: Not Supported 00:12:32.501 Reservations: Not Supported 00:12:32.501 Timestamp: Not Supported 00:12:32.501 Copy: Supported 00:12:32.501 Volatile Write Cache: Present 00:12:32.501 Atomic Write Unit (Normal): 1 00:12:32.501 Atomic Write Unit (PFail): 1 00:12:32.501 Atomic Compare & Write Unit: 1 00:12:32.501 Fused Compare & Write: Supported 00:12:32.501 Scatter-Gather List 00:12:32.501 SGL Command Set: Supported (Dword aligned) 00:12:32.501 SGL Keyed: Not Supported 00:12:32.501 SGL Bit Bucket Descriptor: Not Supported 00:12:32.501 SGL Metadata Pointer: Not Supported 00:12:32.501 Oversized SGL: Not Supported 00:12:32.501 SGL Metadata Address: Not Supported 00:12:32.501 SGL Offset: Not Supported 00:12:32.501 Transport SGL Data Block: Not Supported 00:12:32.501 Replay Protected Memory Block: Not Supported 00:12:32.501 00:12:32.501 Firmware Slot Information 00:12:32.501 ========================= 00:12:32.501 Active slot: 1 00:12:32.501 Slot 1 Firmware Revision: 24.09 00:12:32.501 00:12:32.501 00:12:32.501 Commands Supported and Effects 00:12:32.501 ============================== 00:12:32.501 Admin Commands 00:12:32.501 -------------- 00:12:32.501 Get Log Page (02h): Supported 00:12:32.501 Identify (06h): Supported 00:12:32.501 Abort (08h): Supported 00:12:32.501 Set Features (09h): Supported 00:12:32.501 Get Features (0Ah): Supported 00:12:32.501 Asynchronous Event Request (0Ch): Supported 00:12:32.501 Keep Alive (18h): Supported 00:12:32.501 I/O Commands 00:12:32.501 ------------ 00:12:32.501 Flush (00h): Supported LBA-Change 00:12:32.501 Write (01h): Supported LBA-Change 00:12:32.501 Read (02h): Supported 00:12:32.501 Compare (05h): Supported 00:12:32.501 Write Zeroes (08h): Supported LBA-Change 00:12:32.501 Dataset Management (09h): Supported LBA-Change 00:12:32.501 Copy (19h): Supported LBA-Change 00:12:32.501 Unknown (79h): Supported LBA-Change 00:12:32.501 Unknown (7Ah): Supported 00:12:32.501 00:12:32.501 Error Log 00:12:32.501 ========= 00:12:32.501 00:12:32.501 Arbitration 00:12:32.501 =========== 00:12:32.501 Arbitration Burst: 1 00:12:32.501 00:12:32.501 Power Management 00:12:32.501 ================ 00:12:32.501 Number of Power States: 1 00:12:32.501 Current Power State: Power State #0 00:12:32.501 Power State #0: 00:12:32.501 Max Power: 0.00 W 00:12:32.501 Non-Operational State: Operational 00:12:32.501 Entry Latency: Not Reported 00:12:32.501 Exit Latency: Not Reported 00:12:32.501 Relative Read Throughput: 0 00:12:32.501 Relative Read Latency: 0 00:12:32.501 Relative Write Throughput: 0 00:12:32.501 Relative Write Latency: 0 00:12:32.501 Idle Power: Not Reported 00:12:32.501 Active Power: Not Reported 00:12:32.501 Non-Operational Permissive Mode: Not Supported 00:12:32.501 00:12:32.501 Health Information 00:12:32.501 ================== 00:12:32.501 Critical Warnings: 00:12:32.501 Available Spare Space: OK 00:12:32.501 Temperature: OK 00:12:32.501 Device Reliability: OK 00:12:32.501 Read Only: No 00:12:32.501 Volatile Memory Backup: OK 00:12:32.501 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:32.501
[2024-06-10 14:21:09.979405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:32.501 [2024-06-10 14:21:09.979414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:32.501 [2024-06-10 14:21:09.979438] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:32.501 [2024-06-10 14:21:09.979447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.501 [2024-06-10 14:21:09.979453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.501 [2024-06-10 14:21:09.979459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.501 [2024-06-10 14:21:09.979465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.501 [2024-06-10 14:21:09.979553] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.501 [2024-06-10 14:21:09.979563] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.501 [2024-06-10 14:21:09.980548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.501 [2024-06-10 14:21:09.980596] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:32.501 [2024-06-10 14:21:09.980602] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:32.501 [2024-06-10 14:21:09.981564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.501 [2024-06-10 14:21:09.981574] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:32.501 [2024-06-10 14:21:09.981635] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.501 [2024-06-10 14:21:09.986321] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.501
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.501 Available Spare: 0% 00:12:32.501 Available Spare Threshold: 0% 00:12:32.501 Life Percentage Used: 0% 00:12:32.501 Data Units Read: 0 00:12:32.501 Data Units Written: 0 00:12:32.501 Host Read Commands: 0 00:12:32.501 Host Write Commands: 0 00:12:32.501 Controller Busy Time: 0 minutes 00:12:32.501 Power Cycles: 0 00:12:32.501 Power On Hours: 0 hours 00:12:32.501 Unsafe Shutdowns: 0 00:12:32.501 Unrecoverable Media Errors: 0 00:12:32.501 Lifetime Error Log Entries: 0 00:12:32.501 Warning Temperature Time: 0 minutes 00:12:32.501 Critical Temperature Time: 0 minutes 00:12:32.501 00:12:32.501 Number of Queues 00:12:32.501 ================ 00:12:32.501 Number of I/O Submission Queues: 127 00:12:32.501 Number of I/O Completion Queues: 127 00:12:32.501 00:12:32.501 Active Namespaces 00:12:32.501 ================= 00:12:32.501 Namespace ID:1 00:12:32.501 Error Recovery Timeout: Unlimited 00:12:32.501 Command Set Identifier: NVM (00h) 00:12:32.501 Deallocate: Supported 00:12:32.501 Deallocated/Unwritten Error: Not Supported 00:12:32.502 Deallocated Read Value: Unknown 00:12:32.502 Deallocate
in Write Zeroes: Not Supported 00:12:32.502 Deallocated Guard Field: 0xFFFF 00:12:32.502 Flush: Supported 00:12:32.502 Reservation: Supported 00:12:32.502 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.502 Size (in LBAs): 131072 (0GiB) 00:12:32.502 Capacity (in LBAs): 131072 (0GiB) 00:12:32.502 Utilization (in LBAs): 131072 (0GiB) 00:12:32.502 NGUID: 646AA5BC1CC6471E99C401DDAA811B53 00:12:32.502 UUID: 646aa5bc-1cc6-471e-99c4-01ddaa811b53 00:12:32.502 Thin Provisioning: Not Supported 00:12:32.502 Per-NS Atomic Units: Yes 00:12:32.502 Atomic Boundary Size (Normal): 0 00:12:32.502 Atomic Boundary Size (PFail): 0 00:12:32.502 Atomic Boundary Offset: 0 00:12:32.502 Maximum Single Source Range Length: 65535 00:12:32.502 Maximum Copy Length: 65535 00:12:32.502 Maximum Source Range Count: 1 00:12:32.502 NGUID/EUI64 Never Reused: No 00:12:32.502 Namespace Write Protected: No 00:12:32.502 Number of LBA Formats: 1 00:12:32.502 Current LBA Format: LBA Format #00 00:12:32.502 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:32.502 00:12:32.502 14:21:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:32.502 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.763 [2024-06-10 14:21:10.191023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.050 Initializing NVMe Controllers 00:12:38.050 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.050 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:38.050 Initialization complete. Launching workers. 00:12:38.050 ======================================================== 00:12:38.050 Latency(us) 00:12:38.050 Device Information : IOPS MiB/s Average min max 00:12:38.050 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34696.36 135.53 3690.43 1206.94 10584.94 00:12:38.050 ======================================================== 00:12:38.050 Total : 34696.36 135.53 3690.43 1206.94 10584.94 00:12:38.050 00:12:38.050 [2024-06-10 14:21:15.216862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.050 14:21:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:38.050 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.050 [2024-06-10 14:21:15.417849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.402 Initializing NVMe Controllers 00:12:43.402 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.402 Initialization complete. Launching workers. 
00:12:43.402 ======================================================== 00:12:43.402 Latency(us) 00:12:43.402 Device Information : IOPS MiB/s Average min max 00:12:43.402 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.53 7493.57 8385.82 00:12:43.402 ======================================================== 00:12:43.402 Total : 16051.20 62.70 7980.53 7493.57 8385.82 00:12:43.402 00:12:43.402 [2024-06-10 14:21:20.452103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.402 14:21:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:43.402 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.402 [2024-06-10 14:21:20.672091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.704 [2024-06-10 14:21:25.733497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.704 Initializing NVMe Controllers 00:12:48.704 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.704 Initialization complete. Launching workers. 00:12:48.704 Starting thread on core 2 00:12:48.704 Starting thread on core 3 00:12:48.704 Starting thread on core 1 00:12:48.704 14:21:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:48.704 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.705 [2024-06-10 14:21:26.001789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.006 [2024-06-10 14:21:29.060453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.006 Initializing NVMe Controllers 00:12:52.006 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.006 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:52.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:52.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:52.006 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:52.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:52.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:52.006 Initialization complete. Launching workers. 
00:12:52.006 Starting thread on core 1 with urgent priority queue 00:12:52.006 Starting thread on core 2 with urgent priority queue 00:12:52.006 Starting thread on core 3 with urgent priority queue 00:12:52.006 Starting thread on core 0 with urgent priority queue 00:12:52.006 SPDK bdev Controller (SPDK1 ) core 0: 12327.00 IO/s 8.11 secs/100000 ios 00:12:52.006 SPDK bdev Controller (SPDK1 ) core 1: 11095.33 IO/s 9.01 secs/100000 ios 00:12:52.006 SPDK bdev Controller (SPDK1 ) core 2: 10632.00 IO/s 9.41 secs/100000 ios 00:12:52.006 SPDK bdev Controller (SPDK1 ) core 3: 12775.67 IO/s 7.83 secs/100000 ios 00:12:52.006 ======================================================== 00:12:52.006 00:12:52.006 14:21:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:52.006 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.006 [2024-06-10 14:21:29.321884] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.006 Initializing NVMe Controllers 00:12:52.006 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.006 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:52.006 Namespace ID: 1 size: 0GB 00:12:52.006 Initialization complete. 00:12:52.006 INFO: using host memory buffer for IO 00:12:52.006 Hello world! 00:12:52.006 [2024-06-10 14:21:29.355117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.006 14:21:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:52.006 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.266 [2024-06-10 14:21:29.614799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.207 Initializing NVMe Controllers 00:12:53.207 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.207 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.207 Initialization complete. Launching workers. 
00:12:53.207 submit (in ns) avg, min, max = 8376.8, 3927.5, 4003785.0 00:12:53.207 complete (in ns) avg, min, max = 18631.7, 2397.5, 4994372.5 00:12:53.207 00:12:53.207 Submit histogram 00:12:53.207 ================ 00:12:53.207 Range in us Cumulative Count 00:12:53.207 3.920 - 3.947: 0.0567% ( 11) 00:12:53.207 3.947 - 3.973: 2.6515% ( 503) 00:12:53.207 3.973 - 4.000: 9.2546% ( 1280) 00:12:53.207 4.000 - 4.027: 19.2778% ( 1943) 00:12:53.207 4.027 - 4.053: 30.8022% ( 2234) 00:12:53.207 4.053 - 4.080: 41.4496% ( 2064) 00:12:53.207 4.080 - 4.107: 54.4751% ( 2525) 00:12:53.207 4.107 - 4.133: 70.2038% ( 3049) 00:12:53.207 4.133 - 4.160: 83.5440% ( 2586) 00:12:53.207 4.160 - 4.187: 92.0815% ( 1655) 00:12:53.207 4.187 - 4.213: 96.5953% ( 875) 00:12:53.207 4.213 - 4.240: 98.5040% ( 370) 00:12:53.207 4.240 - 4.267: 99.2107% ( 137) 00:12:53.207 4.267 - 4.293: 99.4326% ( 43) 00:12:53.207 4.293 - 4.320: 99.4893% ( 11) 00:12:53.207 4.320 - 4.347: 99.5202% ( 6) 00:12:53.207 4.533 - 4.560: 99.5254% ( 1) 00:12:53.207 4.560 - 4.587: 99.5306% ( 1) 00:12:53.207 4.613 - 4.640: 99.5357% ( 1) 00:12:53.207 4.800 - 4.827: 99.5409% ( 1) 00:12:53.207 4.987 - 5.013: 99.5460% ( 1) 00:12:53.207 5.040 - 5.067: 99.5512% ( 1) 00:12:53.207 5.520 - 5.547: 99.5564% ( 1) 00:12:53.207 5.573 - 5.600: 99.5615% ( 1) 00:12:53.207 5.680 - 5.707: 99.5667% ( 1) 00:12:53.207 5.733 - 5.760: 99.5770% ( 2) 00:12:53.207 5.787 - 5.813: 99.5822% ( 1) 00:12:53.207 5.920 - 5.947: 99.5873% ( 1) 00:12:53.207 5.947 - 5.973: 99.5925% ( 1) 00:12:53.207 6.000 - 6.027: 99.6079% ( 3) 00:12:53.207 6.027 - 6.053: 99.6131% ( 1) 00:12:53.207 6.080 - 6.107: 99.6286% ( 3) 00:12:53.208 6.107 - 6.133: 99.6441% ( 3) 00:12:53.208 6.133 - 6.160: 99.6492% ( 1) 00:12:53.208 6.160 - 6.187: 99.6544% ( 1) 00:12:53.208 6.293 - 6.320: 99.6698% ( 3) 00:12:53.208 6.320 - 6.347: 99.6750% ( 1) 00:12:53.208 6.373 - 6.400: 99.6802% ( 1) 00:12:53.208 6.400 - 6.427: 99.6853% ( 1) 00:12:53.208 6.453 - 6.480: 99.6905% ( 1) 00:12:53.208 6.507 - 6.533: 99.6956% ( 1) 00:12:53.208 6.560 - 6.587: 99.7008% ( 1) 00:12:53.208 6.613 - 6.640: 99.7060% ( 1) 00:12:53.208 6.693 - 6.720: 99.7111% ( 1) 00:12:53.208 6.747 - 6.773: 99.7163% ( 1) 00:12:53.208 6.773 - 6.800: 99.7266% ( 2) 00:12:53.208 6.800 - 6.827: 99.7369% ( 2) 00:12:53.208 6.880 - 6.933: 99.7524% ( 3) 00:12:53.208 6.987 - 7.040: 99.7575% ( 1) 00:12:53.208 7.040 - 7.093: 99.7730% ( 3) 00:12:53.208 7.147 - 7.200: 99.7782% ( 1) 00:12:53.208 7.200 - 7.253: 99.7885% ( 2) 00:12:53.208 7.253 - 7.307: 99.8143% ( 5) 00:12:53.208 7.307 - 7.360: 99.8194% ( 1) 00:12:53.208 7.360 - 7.413: 99.8246% ( 1) 00:12:53.208 7.467 - 7.520: 99.8298% ( 1) 00:12:53.208 7.520 - 7.573: 99.8349% ( 1) 00:12:53.208 7.573 - 7.627: 99.8401% ( 1) 00:12:53.208 7.680 - 7.733: 99.8452% ( 1) 00:12:53.208 7.840 - 7.893: 99.8504% ( 1) 00:12:53.208 9.013 - 9.067: 99.8556% ( 1) 00:12:53.208 9.387 - 9.440: 99.8607% ( 1) 00:12:53.208 10.400 - 10.453: 99.8710% ( 2) 00:12:53.208 20.373 - 20.480: 99.8762% ( 1) 00:12:53.208 23.680 - 23.787: 99.8814% ( 1) 00:12:53.208 27.733 - 27.947: 99.8865% ( 1) 00:12:53.208 35.413 - 35.627: 99.8917% ( 1) 00:12:53.208 2880.853 - 2894.507: 99.8968% ( 1) 00:12:53.208 3986.773 - 4014.080: 100.0000% ( 20) 00:12:53.208 00:12:53.208 Complete histogram 00:12:53.208 ================== 00:12:53.208 Range in us Cumulative Count 00:12:53.208 2.387 - 2.400: 0.0206% ( 4) 00:12:53.208 2.400 - 2.413: 0.9182% ( 174) 00:12:53.208 2.413 - 2.427: 1.0730% ( 30) 00:12:53.208 2.427 - 2.440: 1.2639% ( 37) 00:12:53.208 2.440 - 2.453: 1.2897% ( 5) 00:12:53.208 
2.453 - 2.467: 3.7606% ( 479) 00:12:53.208 2.467 - 2.480: 45.5713% ( 8105) 00:12:53.208 2.480 - 2.493: 55.7802% ( 1979) 00:12:53.208 2.493 - 2.507: 70.5907% ( 2871) 00:12:53.208 [2024-06-10 14:21:30.636378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.208 2.507 - 2.520: 78.1481% ( 1465) 00:12:53.208 2.520 - 2.533: 81.7023% ( 689) 00:12:53.208 2.533 - 2.547: 85.9995% ( 833) 00:12:53.208 2.547 - 2.560: 91.1839% ( 1005) 00:12:53.208 2.560 - 2.573: 94.7072% ( 683) 00:12:53.208 2.573 - 2.587: 97.0905% ( 462) 00:12:53.208 2.587 - 2.600: 98.5298% ( 279) 00:12:53.208 2.600 - 2.613: 99.1849% ( 127) 00:12:53.208 2.613 - 2.627: 99.2933% ( 21) 00:12:53.208 2.627 - 2.640: 99.3087% ( 3) 00:12:53.208 2.640 - 2.653: 99.3139% ( 1) 00:12:53.208 2.653 - 2.667: 99.3191% ( 1) 00:12:53.208 2.667 - 2.680: 99.3242% ( 1) 00:12:53.208 2.907 - 2.920: 99.3294% ( 1) 00:12:53.208 4.213 - 4.240: 99.3345% ( 1) 00:12:53.208 4.240 - 4.267: 99.3397% ( 1) 00:12:53.208 4.293 - 4.320: 99.3449% ( 1) 00:12:53.208 4.347 - 4.373: 99.3500% ( 1) 00:12:53.208 4.427 - 4.453: 99.3603% ( 2) 00:12:53.208 4.480 - 4.507: 99.3758% ( 3) 00:12:53.208 4.533 - 4.560: 99.3810% ( 1) 00:12:53.208 4.560 - 4.587: 99.3861% ( 1) 00:12:53.208 4.667 - 4.693: 99.3913% ( 1) 00:12:53.208 4.693 - 4.720: 99.3964% ( 1) 00:12:53.208 4.773 - 4.800: 99.4016% ( 1) 00:12:53.208 4.853 - 4.880: 99.4068% ( 1) 00:12:53.208 4.960 - 4.987: 99.4119% ( 1) 00:12:53.208 5.040 - 5.067: 99.4171% ( 1) 00:12:53.208 5.067 - 5.093: 99.4222% ( 1) 00:12:53.208 5.093 - 5.120: 99.4326% ( 2) 00:12:53.208 5.120 - 5.147: 99.4377% ( 1) 00:12:53.208 5.147 - 5.173: 99.4429% ( 1) 00:12:53.208 5.173 - 5.200: 99.4532% ( 2) 00:12:53.208 5.253 - 5.280: 99.4583% ( 1) 00:12:53.208 5.387 - 5.413: 99.4635% ( 1) 00:12:53.208 5.413 - 5.440: 99.4687% ( 1) 00:12:53.208 5.467 - 5.493: 99.4738% ( 1) 00:12:53.208 5.547 - 5.573: 99.4841% ( 2) 00:12:53.208 5.680 - 5.707: 99.4893% ( 1) 00:12:53.208 5.707 - 5.733: 99.4945% ( 1) 00:12:53.208 5.760 - 5.787: 99.4996% ( 1) 00:12:53.208 5.787 - 5.813: 99.5048% ( 1) 00:12:53.208 5.813 - 5.840: 99.5099% ( 1) 00:12:53.208 5.867 - 5.893: 99.5151% ( 1) 00:12:53.208 5.893 - 5.920: 99.5202% ( 1) 00:12:53.208 6.000 - 6.027: 99.5254% ( 1) 00:12:53.208 6.027 - 6.053: 99.5306% ( 1) 00:12:53.208 6.107 - 6.133: 99.5357% ( 1) 00:12:53.208 6.133 - 6.160: 99.5409% ( 1) 00:12:53.208 6.213 - 6.240: 99.5460% ( 1) 00:12:53.208 6.373 - 6.400: 99.5512% ( 1) 00:12:53.208 6.533 - 6.560: 99.5564% ( 1) 00:12:53.208 6.773 - 6.800: 99.5615% ( 1) 00:12:53.208 7.200 - 7.253: 99.5667% ( 1) 00:12:53.208 7.360 - 7.413: 99.5718% ( 1) 00:12:53.208 7.467 - 7.520: 99.5770% ( 1) 00:12:53.208 12.853 - 12.907: 99.5822% ( 1) 00:12:53.208 13.280 - 13.333: 99.5873% ( 1) 00:12:53.208 13.387 - 13.440: 99.5925% ( 1) 00:12:53.208 14.187 - 14.293: 99.5976% ( 1) 00:12:53.208 3986.773 - 4014.080: 99.9948% ( 77) 00:12:53.208 4969.813 - 4997.120: 100.0000% ( 1) 00:12:53.208 00:12:53.208 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:53.208 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:53.208 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:53.208 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:53.208 14:21:30 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.468 [ 00:12:53.468 { 00:12:53.468 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.468 "subtype": "Discovery", 00:12:53.468 "listen_addresses": [], 00:12:53.468 "allow_any_host": true, 00:12:53.468 "hosts": [] 00:12:53.468 }, 00:12:53.468 { 00:12:53.468 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.468 "subtype": "NVMe", 00:12:53.468 "listen_addresses": [ 00:12:53.468 { 00:12:53.468 "trtype": "VFIOUSER", 00:12:53.468 "adrfam": "IPv4", 00:12:53.468 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.468 "trsvcid": "0" 00:12:53.468 } 00:12:53.468 ], 00:12:53.468 "allow_any_host": true, 00:12:53.468 "hosts": [], 00:12:53.468 "serial_number": "SPDK1", 00:12:53.468 "model_number": "SPDK bdev Controller", 00:12:53.468 "max_namespaces": 32, 00:12:53.468 "min_cntlid": 1, 00:12:53.468 "max_cntlid": 65519, 00:12:53.468 "namespaces": [ 00:12:53.468 { 00:12:53.468 "nsid": 1, 00:12:53.468 "bdev_name": "Malloc1", 00:12:53.468 "name": "Malloc1", 00:12:53.468 "nguid": "646AA5BC1CC6471E99C401DDAA811B53", 00:12:53.468 "uuid": "646aa5bc-1cc6-471e-99c4-01ddaa811b53" 00:12:53.468 } 00:12:53.468 ] 00:12:53.468 }, 00:12:53.468 { 00:12:53.468 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.468 "subtype": "NVMe", 00:12:53.468 "listen_addresses": [ 00:12:53.468 { 00:12:53.468 "trtype": "VFIOUSER", 00:12:53.468 "adrfam": "IPv4", 00:12:53.468 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.468 "trsvcid": "0" 00:12:53.468 } 00:12:53.468 ], 00:12:53.469 "allow_any_host": true, 00:12:53.469 "hosts": [], 00:12:53.469 "serial_number": "SPDK2", 00:12:53.469 "model_number": "SPDK bdev Controller", 00:12:53.469 "max_namespaces": 32, 00:12:53.469 "min_cntlid": 1, 00:12:53.469 "max_cntlid": 65519, 00:12:53.469 "namespaces": [ 00:12:53.469 { 00:12:53.469 "nsid": 1, 00:12:53.469 "bdev_name": "Malloc2", 00:12:53.469 "name": "Malloc2", 00:12:53.469 "nguid": "A43C610C538146ADA214C64566CD0D76", 00:12:53.469 "uuid": "a43c610c-5381-46ad-a214-c64566cd0d76" 00:12:53.469 } 00:12:53.469 ] 00:12:53.469 } 00:12:53.469 ] 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2941685 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:53.469 14:21:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:53.469 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.729 [2024-06-10 14:21:31.066828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.729 Malloc3 00:12:53.729 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:53.989 [2024-06-10 14:21:31.330929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.989 Asynchronous Event Request test 00:12:53.989 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.989 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.989 Registering asynchronous event callbacks... 00:12:53.989 Starting namespace attribute notice tests for all controllers... 00:12:53.989 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:53.989 aer_cb - Changed Namespace 00:12:53.989 Cleaning up... 00:12:53.989 [ 00:12:53.989 { 00:12:53.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.989 "subtype": "Discovery", 00:12:53.989 "listen_addresses": [], 00:12:53.989 "allow_any_host": true, 00:12:53.989 "hosts": [] 00:12:53.989 }, 00:12:53.989 { 00:12:53.989 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.989 "subtype": "NVMe", 00:12:53.989 "listen_addresses": [ 00:12:53.989 { 00:12:53.989 "trtype": "VFIOUSER", 00:12:53.989 "adrfam": "IPv4", 00:12:53.989 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.989 "trsvcid": "0" 00:12:53.989 } 00:12:53.989 ], 00:12:53.989 "allow_any_host": true, 00:12:53.989 "hosts": [], 00:12:53.989 "serial_number": "SPDK1", 00:12:53.989 "model_number": "SPDK bdev Controller", 00:12:53.989 "max_namespaces": 32, 00:12:53.989 "min_cntlid": 1, 00:12:53.989 "max_cntlid": 65519, 00:12:53.989 "namespaces": [ 00:12:53.989 { 00:12:53.989 "nsid": 1, 00:12:53.989 "bdev_name": "Malloc1", 00:12:53.989 "name": "Malloc1", 00:12:53.989 "nguid": "646AA5BC1CC6471E99C401DDAA811B53", 00:12:53.989 "uuid": "646aa5bc-1cc6-471e-99c4-01ddaa811b53" 00:12:53.989 }, 00:12:53.989 { 00:12:53.989 "nsid": 2, 00:12:53.989 "bdev_name": "Malloc3", 00:12:53.989 "name": "Malloc3", 00:12:53.989 "nguid": "7F05CEF2CA69411DBCB6FFAC38596E10", 00:12:53.989 "uuid": "7f05cef2-ca69-411d-bcb6-ffac38596e10" 00:12:53.989 } 00:12:53.989 ] 00:12:53.989 }, 00:12:53.989 { 00:12:53.989 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.989 "subtype": "NVMe", 00:12:53.989 "listen_addresses": [ 00:12:53.989 { 00:12:53.989 "trtype": "VFIOUSER", 00:12:53.989 "adrfam": "IPv4", 00:12:53.989 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.989 "trsvcid": "0" 00:12:53.989 } 00:12:53.989 ], 00:12:53.989 "allow_any_host": true, 00:12:53.989 "hosts": [], 00:12:53.989 "serial_number": "SPDK2", 00:12:53.989 "model_number": "SPDK bdev Controller", 00:12:53.989 
"max_namespaces": 32, 00:12:53.989 "min_cntlid": 1, 00:12:53.989 "max_cntlid": 65519, 00:12:53.989 "namespaces": [ 00:12:53.989 { 00:12:53.989 "nsid": 1, 00:12:53.989 "bdev_name": "Malloc2", 00:12:53.989 "name": "Malloc2", 00:12:53.989 "nguid": "A43C610C538146ADA214C64566CD0D76", 00:12:53.989 "uuid": "a43c610c-5381-46ad-a214-c64566cd0d76" 00:12:53.989 } 00:12:53.989 ] 00:12:53.989 } 00:12:53.989 ] 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2941685 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:53.989 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:54.252 [2024-06-10 14:21:31.595856] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:12:54.252 [2024-06-10 14:21:31.595898] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941820 ] 00:12:54.252 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.252 [2024-06-10 14:21:31.627851] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:54.252 [2024-06-10 14:21:31.636547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.252 [2024-06-10 14:21:31.636567] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3ed412b000 00:12:54.252 [2024-06-10 14:21:31.637550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.638554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.639558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.640573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.641577] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.642586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.643593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.644598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.252 [2024-06-10 14:21:31.645605] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.252 [2024-06-10 14:21:31.645618] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3ed4120000 00:12:54.252 [2024-06-10 14:21:31.646944] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.252 [2024-06-10 14:21:31.667475] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:54.252 [2024-06-10 14:21:31.667497] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:54.252 [2024-06-10 14:21:31.669559] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:54.252 [2024-06-10 14:21:31.669604] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:54.252 [2024-06-10 14:21:31.669686] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:54.252 [2024-06-10 14:21:31.669703] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:54.252 [2024-06-10 14:21:31.669709] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:54.252 [2024-06-10 14:21:31.670566] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:54.252 [2024-06-10 14:21:31.670577] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:54.252 [2024-06-10 14:21:31.670585] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:54.252 [2024-06-10 14:21:31.671576] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:54.252 [2024-06-10 14:21:31.671587] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:54.252 [2024-06-10 14:21:31.671595] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:54.252 [2024-06-10 14:21:31.672582] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:54.253 [2024-06-10 14:21:31.672591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:54.253 [2024-06-10 14:21:31.673585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:54.253 [2024-06-10 14:21:31.673593] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:54.253 [2024-06-10 14:21:31.673599] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:54.253 [2024-06-10 14:21:31.673605] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:54.253 [2024-06-10 14:21:31.673710] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:54.253 [2024-06-10 14:21:31.673715] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:54.253 [2024-06-10 14:21:31.673720] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:54.253 [2024-06-10 14:21:31.674598] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:54.253 [2024-06-10 14:21:31.675606] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:54.253 [2024-06-10 14:21:31.676615] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:54.253 [2024-06-10 14:21:31.677618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.253 [2024-06-10 14:21:31.677658] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:54.253 [2024-06-10 14:21:31.678629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:54.253 [2024-06-10 14:21:31.678638] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:54.253 [2024-06-10 14:21:31.678643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.678664] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:54.253 [2024-06-10 14:21:31.678672] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.678686] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.253 [2024-06-10 14:21:31.678691] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.253 [2024-06-10 14:21:31.678703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.685323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.685336] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:54.253 [2024-06-10 14:21:31.685341] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:54.253 [2024-06-10 14:21:31.685347] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:54.253 [2024-06-10 14:21:31.685352] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:54.253 [2024-06-10 14:21:31.685357] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:54.253 [2024-06-10 14:21:31.685362] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:54.253 [2024-06-10 14:21:31.685366] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.685374] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.685384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.693321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.693334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.253 [2024-06-10 14:21:31.693343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.253 [2024-06-10 14:21:31.693351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.253 [2024-06-10 14:21:31.693359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.253 [2024-06-10 14:21:31.693366] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.693375] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.693384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.701320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.701329] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:54.253 [2024-06-10 14:21:31.701334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.701341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.701348] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.701357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.709321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.709374] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.709382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.709390] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:54.253 [2024-06-10 14:21:31.709394] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:54.253 [2024-06-10 14:21:31.709400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.717321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.717332] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:54.253 [2024-06-10 14:21:31.717344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.717352] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.717359] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.253 [2024-06-10 14:21:31.717363] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.253 [2024-06-10 14:21:31.717369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.725321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.725337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.725345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.725354] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.253 [2024-06-10 14:21:31.725358] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.253 [2024-06-10 14:21:31.725364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.733320] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.733331] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733360] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:54.253 [2024-06-10 14:21:31.733364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:54.253 [2024-06-10 14:21:31.733369] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:54.253 [2024-06-10 14:21:31.733386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.741321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.741335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.749322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:54.253 [2024-06-10 14:21:31.749335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:54.253 [2024-06-10 14:21:31.757322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:54.254 [2024-06-10 14:21:31.757335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.254 [2024-06-10 14:21:31.763322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:54.254 [2024-06-10 14:21:31.763336] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:54.254 [2024-06-10 14:21:31.763341] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:54.254 [2024-06-10 14:21:31.763344] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:54.254 [2024-06-10 14:21:31.763349] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:54.254 [2024-06-10 14:21:31.763355] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:54.254 [2024-06-10 14:21:31.763362] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:54.254 [2024-06-10 14:21:31.763369] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:54.254 [2024-06-10 14:21:31.763375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:54.254 [2024-06-10 14:21:31.763382] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:54.254 [2024-06-10 14:21:31.763386] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.254 [2024-06-10 14:21:31.763392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.254 [2024-06-10 14:21:31.763400] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:54.254 [2024-06-10 14:21:31.763404] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:54.254 [2024-06-10 14:21:31.763410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:54.254 [2024-06-10 14:21:31.773323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:54.254 [2024-06-10 14:21:31.773338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:54.254 [2024-06-10 14:21:31.773347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:54.254 [2024-06-10 14:21:31.773356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:54.254 ===================================================== 00:12:54.254 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.254 ===================================================== 00:12:54.254 Controller Capabilities/Features 00:12:54.254 ================================ 00:12:54.254 Vendor ID: 4e58 00:12:54.254 Subsystem Vendor ID: 4e58 00:12:54.254 Serial Number: SPDK2 00:12:54.254 Model Number: SPDK bdev Controller 00:12:54.254 Firmware Version: 24.09 00:12:54.254 Recommended Arb Burst: 6 00:12:54.254 IEEE OUI Identifier: 8d 6b 50 00:12:54.254 Multi-path I/O 00:12:54.254 May have multiple subsystem ports: Yes 00:12:54.254 May have multiple controllers: Yes 00:12:54.254 Associated with SR-IOV VF: No 00:12:54.254 Max Data Transfer Size: 131072 00:12:54.254 Max Number of Namespaces: 32 00:12:54.254 Max Number of I/O Queues: 127 00:12:54.254 NVMe Specification Version (VS): 1.3 00:12:54.254 NVMe Specification Version (Identify): 1.3 00:12:54.254 Maximum Queue Entries: 256 00:12:54.254 Contiguous Queues Required: Yes 00:12:54.254 Arbitration Mechanisms Supported 00:12:54.254 Weighted Round Robin: Not Supported 00:12:54.254 Vendor Specific: Not Supported 00:12:54.254 Reset Timeout: 15000 ms 00:12:54.254 Doorbell Stride: 4 bytes 
00:12:54.254 NVM Subsystem Reset: Not Supported 00:12:54.254 Command Sets Supported 00:12:54.254 NVM Command Set: Supported 00:12:54.254 Boot Partition: Not Supported 00:12:54.254 Memory Page Size Minimum: 4096 bytes 00:12:54.254 Memory Page Size Maximum: 4096 bytes 00:12:54.254 Persistent Memory Region: Not Supported 00:12:54.254 Optional Asynchronous Events Supported 00:12:54.254 Namespace Attribute Notices: Supported 00:12:54.254 Firmware Activation Notices: Not Supported 00:12:54.254 ANA Change Notices: Not Supported 00:12:54.254 PLE Aggregate Log Change Notices: Not Supported 00:12:54.254 LBA Status Info Alert Notices: Not Supported 00:12:54.254 EGE Aggregate Log Change Notices: Not Supported 00:12:54.254 Normal NVM Subsystem Shutdown event: Not Supported 00:12:54.254 Zone Descriptor Change Notices: Not Supported 00:12:54.254 Discovery Log Change Notices: Not Supported 00:12:54.254 Controller Attributes 00:12:54.254 128-bit Host Identifier: Supported 00:12:54.254 Non-Operational Permissive Mode: Not Supported 00:12:54.254 NVM Sets: Not Supported 00:12:54.254 Read Recovery Levels: Not Supported 00:12:54.254 Endurance Groups: Not Supported 00:12:54.254 Predictable Latency Mode: Not Supported 00:12:54.254 Traffic Based Keep ALive: Not Supported 00:12:54.254 Namespace Granularity: Not Supported 00:12:54.254 SQ Associations: Not Supported 00:12:54.254 UUID List: Not Supported 00:12:54.254 Multi-Domain Subsystem: Not Supported 00:12:54.254 Fixed Capacity Management: Not Supported 00:12:54.254 Variable Capacity Management: Not Supported 00:12:54.254 Delete Endurance Group: Not Supported 00:12:54.254 Delete NVM Set: Not Supported 00:12:54.254 Extended LBA Formats Supported: Not Supported 00:12:54.254 Flexible Data Placement Supported: Not Supported 00:12:54.254 00:12:54.254 Controller Memory Buffer Support 00:12:54.254 ================================ 00:12:54.254 Supported: No 00:12:54.254 00:12:54.254 Persistent Memory Region Support 00:12:54.254 ================================ 00:12:54.254 Supported: No 00:12:54.254 00:12:54.254 Admin Command Set Attributes 00:12:54.254 ============================ 00:12:54.254 Security Send/Receive: Not Supported 00:12:54.254 Format NVM: Not Supported 00:12:54.254 Firmware Activate/Download: Not Supported 00:12:54.254 Namespace Management: Not Supported 00:12:54.254 Device Self-Test: Not Supported 00:12:54.254 Directives: Not Supported 00:12:54.254 NVMe-MI: Not Supported 00:12:54.254 Virtualization Management: Not Supported 00:12:54.254 Doorbell Buffer Config: Not Supported 00:12:54.254 Get LBA Status Capability: Not Supported 00:12:54.254 Command & Feature Lockdown Capability: Not Supported 00:12:54.254 Abort Command Limit: 4 00:12:54.254 Async Event Request Limit: 4 00:12:54.254 Number of Firmware Slots: N/A 00:12:54.254 Firmware Slot 1 Read-Only: N/A 00:12:54.254 Firmware Activation Without Reset: N/A 00:12:54.254 Multiple Update Detection Support: N/A 00:12:54.254 Firmware Update Granularity: No Information Provided 00:12:54.254 Per-Namespace SMART Log: No 00:12:54.254 Asymmetric Namespace Access Log Page: Not Supported 00:12:54.254 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:54.254 Command Effects Log Page: Supported 00:12:54.254 Get Log Page Extended Data: Supported 00:12:54.254 Telemetry Log Pages: Not Supported 00:12:54.254 Persistent Event Log Pages: Not Supported 00:12:54.254 Supported Log Pages Log Page: May Support 00:12:54.254 Commands Supported & Effects Log Page: Not Supported 00:12:54.254 Feature Identifiers & Effects Log Page:May 
Support 00:12:54.254 NVMe-MI Commands & Effects Log Page: May Support 00:12:54.254 Data Area 4 for Telemetry Log: Not Supported 00:12:54.254 Error Log Page Entries Supported: 128 00:12:54.254 Keep Alive: Supported 00:12:54.254 Keep Alive Granularity: 10000 ms 00:12:54.254 00:12:54.254 NVM Command Set Attributes 00:12:54.254 ========================== 00:12:54.254 Submission Queue Entry Size 00:12:54.254 Max: 64 00:12:54.254 Min: 64 00:12:54.254 Completion Queue Entry Size 00:12:54.254 Max: 16 00:12:54.254 Min: 16 00:12:54.254 Number of Namespaces: 32 00:12:54.254 Compare Command: Supported 00:12:54.254 Write Uncorrectable Command: Not Supported 00:12:54.254 Dataset Management Command: Supported 00:12:54.254 Write Zeroes Command: Supported 00:12:54.254 Set Features Save Field: Not Supported 00:12:54.254 Reservations: Not Supported 00:12:54.254 Timestamp: Not Supported 00:12:54.254 Copy: Supported 00:12:54.254 Volatile Write Cache: Present 00:12:54.254 Atomic Write Unit (Normal): 1 00:12:54.254 Atomic Write Unit (PFail): 1 00:12:54.254 Atomic Compare & Write Unit: 1 00:12:54.254 Fused Compare & Write: Supported 00:12:54.254 Scatter-Gather List 00:12:54.254 SGL Command Set: Supported (Dword aligned) 00:12:54.254 SGL Keyed: Not Supported 00:12:54.254 SGL Bit Bucket Descriptor: Not Supported 00:12:54.254 SGL Metadata Pointer: Not Supported 00:12:54.254 Oversized SGL: Not Supported 00:12:54.254 SGL Metadata Address: Not Supported 00:12:54.254 SGL Offset: Not Supported 00:12:54.254 Transport SGL Data Block: Not Supported 00:12:54.254 Replay Protected Memory Block: Not Supported 00:12:54.254 00:12:54.254 Firmware Slot Information 00:12:54.254 ========================= 00:12:54.254 Active slot: 1 00:12:54.254 Slot 1 Firmware Revision: 24.09 00:12:54.254 00:12:54.254 00:12:54.254 Commands Supported and Effects 00:12:54.254 ============================== 00:12:54.254 Admin Commands 00:12:54.254 -------------- 00:12:54.254 Get Log Page (02h): Supported 00:12:54.254 Identify (06h): Supported 00:12:54.254 Abort (08h): Supported 00:12:54.255 Set Features (09h): Supported 00:12:54.255 Get Features (0Ah): Supported 00:12:54.255 Asynchronous Event Request (0Ch): Supported 00:12:54.255 Keep Alive (18h): Supported 00:12:54.255 I/O Commands 00:12:54.255 ------------ 00:12:54.255 Flush (00h): Supported LBA-Change 00:12:54.255 Write (01h): Supported LBA-Change 00:12:54.255 Read (02h): Supported 00:12:54.255 Compare (05h): Supported 00:12:54.255 Write Zeroes (08h): Supported LBA-Change 00:12:54.255 Dataset Management (09h): Supported LBA-Change 00:12:54.255 Copy (19h): Supported LBA-Change 00:12:54.255 Unknown (79h): Supported LBA-Change 00:12:54.255 Unknown (7Ah): Supported 00:12:54.255 00:12:54.255 Error Log 00:12:54.255 ========= 00:12:54.255 00:12:54.255 Arbitration 00:12:54.255 =========== 00:12:54.255 Arbitration Burst: 1 00:12:54.255 00:12:54.255 Power Management 00:12:54.255 ================ 00:12:54.255 Number of Power States: 1 00:12:54.255 Current Power State: Power State #0 00:12:54.255 Power State #0: 00:12:54.255 Max Power: 0.00 W 00:12:54.255 Non-Operational State: Operational 00:12:54.255 Entry Latency: Not Reported 00:12:54.255 Exit Latency: Not Reported 00:12:54.255 Relative Read Throughput: 0 00:12:54.255 Relative Read Latency: 0 00:12:54.255 Relative Write Throughput: 0 00:12:54.255 Relative Write Latency: 0 00:12:54.255 Idle Power: Not Reported 00:12:54.255 Active Power: Not Reported 00:12:54.255 Non-Operational Permissive Mode: Not Supported 00:12:54.255 00:12:54.255 Health Information 
00:12:54.255 ================== 00:12:54.255 Critical Warnings: 00:12:54.255 Available Spare Space: OK 00:12:54.255 Temperature: OK 00:12:54.255 Device Reliability: OK 00:12:54.255 Read Only: No 00:12:54.255 Volatile Memory Backup: OK 00:12:54.255 Current Temperature: 0 Kelvin (-273 Celsius) [2024-06-10 14:21:31.773455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:54.255 [2024-06-10 14:21:31.781322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:54.255 [2024-06-10 14:21:31.781349] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:54.255 [2024-06-10 14:21:31.781358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.255 [2024-06-10 14:21:31.781364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.255 [2024-06-10 14:21:31.781371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.255 [2024-06-10 14:21:31.781377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.255 [2024-06-10 14:21:31.781432] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:54.255 [2024-06-10 14:21:31.781444] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:54.255 [2024-06-10 14:21:31.782439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.255 [2024-06-10 14:21:31.782487] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:54.255 [2024-06-10 14:21:31.782494] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:54.255 [2024-06-10 14:21:31.783451] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:54.255 [2024-06-10 14:21:31.783464] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:54.255 [2024-06-10 14:21:31.783515] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:54.255 [2024-06-10 14:21:31.786322] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.255 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:54.255 Available Spare: 0% 00:12:54.255 Available Spare Threshold: 0% 00:12:54.255 Life Percentage Used: 0% 00:12:54.255 Data Units Read: 0 00:12:54.255 Data Units Written: 0 00:12:54.255 Host Read Commands: 0 00:12:54.255 Host Write Commands: 0 00:12:54.255 Controller Busy Time: 0 minutes 00:12:54.255 Power Cycles: 0 00:12:54.255 Power On Hours: 0 hours 00:12:54.255 Unsafe Shutdowns: 0 00:12:54.255 Unrecoverable Media Errors: 0 00:12:54.255 Lifetime Error Log Entries: 0 00:12:54.255 Warning Temperature Time: 0 
minutes 00:12:54.255 Critical Temperature Time: 0 minutes 00:12:54.255 00:12:54.255 Number of Queues 00:12:54.255 ================ 00:12:54.255 Number of I/O Submission Queues: 127 00:12:54.255 Number of I/O Completion Queues: 127 00:12:54.255 00:12:54.255 Active Namespaces 00:12:54.255 ================= 00:12:54.255 Namespace ID:1 00:12:54.255 Error Recovery Timeout: Unlimited 00:12:54.255 Command Set Identifier: NVM (00h) 00:12:54.255 Deallocate: Supported 00:12:54.255 Deallocated/Unwritten Error: Not Supported 00:12:54.255 Deallocated Read Value: Unknown 00:12:54.255 Deallocate in Write Zeroes: Not Supported 00:12:54.255 Deallocated Guard Field: 0xFFFF 00:12:54.255 Flush: Supported 00:12:54.255 Reservation: Supported 00:12:54.255 Namespace Sharing Capabilities: Multiple Controllers 00:12:54.255 Size (in LBAs): 131072 (0GiB) 00:12:54.255 Capacity (in LBAs): 131072 (0GiB) 00:12:54.255 Utilization (in LBAs): 131072 (0GiB) 00:12:54.255 NGUID: A43C610C538146ADA214C64566CD0D76 00:12:54.255 UUID: a43c610c-5381-46ad-a214-c64566cd0d76 00:12:54.255 Thin Provisioning: Not Supported 00:12:54.255 Per-NS Atomic Units: Yes 00:12:54.255 Atomic Boundary Size (Normal): 0 00:12:54.255 Atomic Boundary Size (PFail): 0 00:12:54.255 Atomic Boundary Offset: 0 00:12:54.255 Maximum Single Source Range Length: 65535 00:12:54.255 Maximum Copy Length: 65535 00:12:54.255 Maximum Source Range Count: 1 00:12:54.255 NGUID/EUI64 Never Reused: No 00:12:54.255 Namespace Write Protected: No 00:12:54.255 Number of LBA Formats: 1 00:12:54.255 Current LBA Format: LBA Format #00 00:12:54.255 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:54.255 00:12:54.255 14:21:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:54.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.515 [2024-06-10 14:21:31.977617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.811 Initializing NVMe Controllers 00:12:59.811 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:59.811 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:59.811 Initialization complete. Launching workers. 
00:12:59.811 ======================================================== 00:12:59.811 Latency(us) 00:12:59.811 Device Information : IOPS MiB/s Average min max 00:12:59.811 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44080.04 172.19 2903.21 906.07 9782.84 00:12:59.811 ======================================================== 00:12:59.811 Total : 44080.04 172.19 2903.21 906.07 9782.84 00:12:59.811 00:12:59.811 [2024-06-10 14:21:37.083533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.811 14:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:59.811 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.811 [2024-06-10 14:21:37.279246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:05.104 Initializing NVMe Controllers 00:13:05.104 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:05.104 Initialization complete. Launching workers. 00:13:05.104 ======================================================== 00:13:05.104 Latency(us) 00:13:05.104 Device Information : IOPS MiB/s Average min max 00:13:05.104 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34259.76 133.83 3735.11 1204.24 8280.30 00:13:05.104 ======================================================== 00:13:05.104 Total : 34259.76 133.83 3735.11 1204.24 8280.30 00:13:05.104 00:13:05.104 [2024-06-10 14:21:42.298887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:05.104 14:21:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:05.104 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.104 [2024-06-10 14:21:42.518596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.392 [2024-06-10 14:21:47.661406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.393 Initializing NVMe Controllers 00:13:10.393 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.393 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:10.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:10.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:10.393 Initialization complete. Launching workers. 
00:13:10.393 Starting thread on core 2 00:13:10.393 Starting thread on core 3 00:13:10.393 Starting thread on core 1 00:13:10.393 14:21:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:10.393 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.393 [2024-06-10 14:21:47.932859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.734 [2024-06-10 14:21:50.986651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.734 Initializing NVMe Controllers 00:13:13.734 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.734 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:13.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:13.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:13.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:13.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:13.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:13.734 Initialization complete. Launching workers. 00:13:13.734 Starting thread on core 1 with urgent priority queue 00:13:13.734 Starting thread on core 2 with urgent priority queue 00:13:13.734 Starting thread on core 3 with urgent priority queue 00:13:13.734 Starting thread on core 0 with urgent priority queue 00:13:13.734 SPDK bdev Controller (SPDK2 ) core 0: 12415.00 IO/s 8.05 secs/100000 ios 00:13:13.734 SPDK bdev Controller (SPDK2 ) core 1: 8407.33 IO/s 11.89 secs/100000 ios 00:13:13.734 SPDK bdev Controller (SPDK2 ) core 2: 12308.67 IO/s 8.12 secs/100000 ios 00:13:13.734 SPDK bdev Controller (SPDK2 ) core 3: 11572.67 IO/s 8.64 secs/100000 ios 00:13:13.734 ======================================================== 00:13:13.734 00:13:13.734 14:21:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.734 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.734 [2024-06-10 14:21:51.248728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.734 Initializing NVMe Controllers 00:13:13.734 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.734 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.734 Namespace ID: 1 size: 0GB 00:13:13.734 Initialization complete. 00:13:13.734 INFO: using host memory buffer for IO 00:13:13.734 Hello world! 
00:13:13.734 [2024-06-10 14:21:51.260796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.734 14:21:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:13.996 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.996 [2024-06-10 14:21:51.516296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.383 Initializing NVMe Controllers 00:13:15.383 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.383 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.383 Initialization complete. Launching workers. 00:13:15.383 submit (in ns) avg, min, max = 8592.8, 3920.8, 4000043.3 00:13:15.383 complete (in ns) avg, min, max = 21108.5, 2382.5, 3999042.5 00:13:15.383 00:13:15.383 Submit histogram 00:13:15.383 ================ 00:13:15.383 Range in us Cumulative Count 00:13:15.383 3.920 - 3.947: 1.0870% ( 164) 00:13:15.383 3.947 - 3.973: 7.2977% ( 937) 00:13:15.383 3.973 - 4.000: 15.9608% ( 1307) 00:13:15.383 4.000 - 4.027: 26.8112% ( 1637) 00:13:15.383 4.027 - 4.053: 37.6085% ( 1629) 00:13:15.383 4.053 - 4.080: 48.3661% ( 1623) 00:13:15.383 4.080 - 4.107: 63.0874% ( 2221) 00:13:15.383 4.107 - 4.133: 77.8352% ( 2225) 00:13:15.383 4.133 - 4.160: 89.2755% ( 1726) 00:13:15.383 4.160 - 4.187: 95.7845% ( 982) 00:13:15.383 4.187 - 4.213: 98.4689% ( 405) 00:13:15.383 4.213 - 4.240: 99.2179% ( 113) 00:13:15.383 4.240 - 4.267: 99.3902% ( 26) 00:13:15.383 4.267 - 4.293: 99.4167% ( 4) 00:13:15.383 4.320 - 4.347: 99.4233% ( 1) 00:13:15.383 4.347 - 4.373: 99.4499% ( 4) 00:13:15.383 4.373 - 4.400: 99.4631% ( 2) 00:13:15.383 4.427 - 4.453: 99.4697% ( 1) 00:13:15.383 4.480 - 4.507: 99.4764% ( 1) 00:13:15.383 4.507 - 4.533: 99.4830% ( 1) 00:13:15.383 4.693 - 4.720: 99.4896% ( 1) 00:13:15.383 4.800 - 4.827: 99.4963% ( 1) 00:13:15.383 5.413 - 5.440: 99.5029% ( 1) 00:13:15.383 5.467 - 5.493: 99.5095% ( 1) 00:13:15.383 5.493 - 5.520: 99.5161% ( 1) 00:13:15.383 5.627 - 5.653: 99.5228% ( 1) 00:13:15.383 5.760 - 5.787: 99.5294% ( 1) 00:13:15.383 5.973 - 6.000: 99.5360% ( 1) 00:13:15.383 6.000 - 6.027: 99.5427% ( 1) 00:13:15.383 6.053 - 6.080: 99.5559% ( 2) 00:13:15.383 6.080 - 6.107: 99.5625% ( 1) 00:13:15.383 6.107 - 6.133: 99.5692% ( 1) 00:13:15.383 6.160 - 6.187: 99.5891% ( 3) 00:13:15.383 6.240 - 6.267: 99.5957% ( 1) 00:13:15.383 6.373 - 6.400: 99.6023% ( 1) 00:13:15.383 6.453 - 6.480: 99.6089% ( 1) 00:13:15.383 6.507 - 6.533: 99.6156% ( 1) 00:13:15.383 6.560 - 6.587: 99.6222% ( 1) 00:13:15.383 6.613 - 6.640: 99.6288% ( 1) 00:13:15.383 6.640 - 6.667: 99.6354% ( 1) 00:13:15.383 6.693 - 6.720: 99.6421% ( 1) 00:13:15.383 6.747 - 6.773: 99.6553% ( 2) 00:13:15.383 6.773 - 6.800: 99.6620% ( 1) 00:13:15.383 6.827 - 6.880: 99.6686% ( 1) 00:13:15.383 6.880 - 6.933: 99.6752% ( 1) 00:13:15.383 6.933 - 6.987: 99.6818% ( 1) 00:13:15.383 6.987 - 7.040: 99.7017% ( 3) 00:13:15.383 7.040 - 7.093: 99.7084% ( 1) 00:13:15.383 7.093 - 7.147: 99.7150% ( 1) 00:13:15.383 7.147 - 7.200: 99.7282% ( 2) 00:13:15.383 7.200 - 7.253: 99.7349% ( 1) 00:13:15.383 7.253 - 7.307: 99.7548% ( 3) 00:13:15.383 7.307 - 7.360: 99.7746% ( 3) 00:13:15.383 7.360 - 7.413: 99.7945% ( 3) 00:13:15.383 7.413 - 7.467: 99.8078% ( 2) 00:13:15.383 7.573 - 7.627: 99.8210% ( 2) 00:13:15.383 7.733 - 7.787: 99.8277% ( 1) 
00:13:15.383 7.840 - 7.893: 99.8343% ( 1) 00:13:15.383 7.893 - 7.947: 99.8409% ( 1) 00:13:15.383 8.160 - 8.213: 99.8476% ( 1) 00:13:15.383 9.333 - 9.387: 99.8542% ( 1) 00:13:15.383 10.080 - 10.133: 99.8608% ( 1) 00:13:15.383 10.827 - 10.880: 99.8674% ( 1) 00:13:15.383 16.000 - 16.107: 99.8741% ( 1) 00:13:15.383 28.587 - 28.800: 99.8807% ( 1) 00:13:15.383 34.987 - 35.200: 99.8873% ( 1) 00:13:15.383 3986.773 - 4014.080: 100.0000% ( 17) 00:13:15.383 00:13:15.383 Complete histogram 00:13:15.383 ================== 00:13:15.383 Range in us Cumulative Count 00:13:15.383 2.373 - 2.387: 0.0066% ( 1) 00:13:15.383 2.387 - 2.400: 0.0795% ( 11) 00:13:15.383 2.400 - 2.413: 1.3654% ( 194) 00:13:15.383 2.413 - 2.427: 1.5245% ( 24) 00:13:15.383 2.427 - 2.440: 25.0746% ( 3553) 00:13:15.383 2.440 - 2.453: 50.3347% ( 3811) 00:13:15.383 2.453 - 2.467: 61.5431% ( 1691) 00:13:15.383 2.467 - 2.480: 72.7978% ( 1698) 00:13:15.383 2.480 - 2.493: 79.9563% ( 1080) 00:13:15.383 2.493 - 2.507: 82.0905% ( 322) 00:13:15.383 2.507 - [2024-06-10 14:21:52.611948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.383 2.520: 85.8222% ( 563) 00:13:15.383 2.520 - 2.533: 90.5482% ( 713) 00:13:15.383 2.533 - 2.547: 94.1075% ( 537) 00:13:15.383 2.547 - 2.560: 96.4340% ( 351) 00:13:15.383 2.560 - 2.573: 98.2965% ( 281) 00:13:15.383 2.573 - 2.587: 99.0588% ( 115) 00:13:15.383 2.587 - 2.600: 99.2576% ( 30) 00:13:15.383 2.600 - 2.613: 99.2709% ( 2) 00:13:15.383 2.613 - 2.627: 99.2842% ( 2) 00:13:15.383 2.627 - 2.640: 99.3107% ( 4) 00:13:15.383 2.640 - 2.653: 99.3173% ( 1) 00:13:15.383 2.960 - 2.973: 99.3239% ( 1) 00:13:15.383 4.507 - 4.533: 99.3305% ( 1) 00:13:15.383 4.827 - 4.853: 99.3372% ( 1) 00:13:15.383 5.067 - 5.093: 99.3504% ( 2) 00:13:15.383 5.120 - 5.147: 99.3571% ( 1) 00:13:15.383 5.333 - 5.360: 99.3637% ( 1) 00:13:15.383 5.360 - 5.387: 99.3703% ( 1) 00:13:15.383 5.387 - 5.413: 99.3769% ( 1) 00:13:15.383 5.440 - 5.467: 99.3836% ( 1) 00:13:15.383 5.520 - 5.547: 99.3968% ( 2) 00:13:15.383 5.573 - 5.600: 99.4035% ( 1) 00:13:15.383 5.600 - 5.627: 99.4233% ( 3) 00:13:15.383 5.653 - 5.680: 99.4300% ( 1) 00:13:15.383 5.680 - 5.707: 99.4366% ( 1) 00:13:15.383 5.707 - 5.733: 99.4432% ( 1) 00:13:15.383 5.733 - 5.760: 99.4499% ( 1) 00:13:15.383 5.840 - 5.867: 99.4565% ( 1) 00:13:15.383 5.893 - 5.920: 99.4631% ( 1) 00:13:15.383 6.027 - 6.053: 99.4697% ( 1) 00:13:15.383 6.080 - 6.107: 99.4764% ( 1) 00:13:15.383 6.587 - 6.613: 99.4830% ( 1) 00:13:15.383 6.880 - 6.933: 99.4896% ( 1) 00:13:15.383 7.840 - 7.893: 99.4963% ( 1) 00:13:15.383 12.000 - 12.053: 99.5029% ( 1) 00:13:15.383 12.960 - 13.013: 99.5095% ( 1) 00:13:15.383 13.760 - 13.867: 99.5161% ( 1) 00:13:15.383 16.000 - 16.107: 99.5228% ( 1) 00:13:15.383 16.427 - 16.533: 99.5294% ( 1) 00:13:15.383 1460.907 - 1467.733: 99.5360% ( 1) 00:13:15.383 3986.773 - 4014.080: 100.0000% ( 70) 00:13:15.383 00:13:15.383 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:15.383 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:15.383 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:15.383 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:15.383 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.383 [ 00:13:15.383 { 00:13:15.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:15.384 "subtype": "Discovery", 00:13:15.384 "listen_addresses": [], 00:13:15.384 "allow_any_host": true, 00:13:15.384 "hosts": [] 00:13:15.384 }, 00:13:15.384 { 00:13:15.384 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:15.384 "subtype": "NVMe", 00:13:15.384 "listen_addresses": [ 00:13:15.384 { 00:13:15.384 "trtype": "VFIOUSER", 00:13:15.384 "adrfam": "IPv4", 00:13:15.384 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:15.384 "trsvcid": "0" 00:13:15.384 } 00:13:15.384 ], 00:13:15.384 "allow_any_host": true, 00:13:15.384 "hosts": [], 00:13:15.384 "serial_number": "SPDK1", 00:13:15.384 "model_number": "SPDK bdev Controller", 00:13:15.384 "max_namespaces": 32, 00:13:15.384 "min_cntlid": 1, 00:13:15.384 "max_cntlid": 65519, 00:13:15.384 "namespaces": [ 00:13:15.384 { 00:13:15.384 "nsid": 1, 00:13:15.384 "bdev_name": "Malloc1", 00:13:15.384 "name": "Malloc1", 00:13:15.384 "nguid": "646AA5BC1CC6471E99C401DDAA811B53", 00:13:15.384 "uuid": "646aa5bc-1cc6-471e-99c4-01ddaa811b53" 00:13:15.384 }, 00:13:15.384 { 00:13:15.384 "nsid": 2, 00:13:15.384 "bdev_name": "Malloc3", 00:13:15.384 "name": "Malloc3", 00:13:15.384 "nguid": "7F05CEF2CA69411DBCB6FFAC38596E10", 00:13:15.384 "uuid": "7f05cef2-ca69-411d-bcb6-ffac38596e10" 00:13:15.384 } 00:13:15.384 ] 00:13:15.384 }, 00:13:15.384 { 00:13:15.384 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:15.384 "subtype": "NVMe", 00:13:15.384 "listen_addresses": [ 00:13:15.384 { 00:13:15.384 "trtype": "VFIOUSER", 00:13:15.384 "adrfam": "IPv4", 00:13:15.384 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:15.384 "trsvcid": "0" 00:13:15.384 } 00:13:15.384 ], 00:13:15.384 "allow_any_host": true, 00:13:15.384 "hosts": [], 00:13:15.384 "serial_number": "SPDK2", 00:13:15.384 "model_number": "SPDK bdev Controller", 00:13:15.384 "max_namespaces": 32, 00:13:15.384 "min_cntlid": 1, 00:13:15.384 "max_cntlid": 65519, 00:13:15.384 "namespaces": [ 00:13:15.384 { 00:13:15.384 "nsid": 1, 00:13:15.384 "bdev_name": "Malloc2", 00:13:15.384 "name": "Malloc2", 00:13:15.384 "nguid": "A43C610C538146ADA214C64566CD0D76", 00:13:15.384 "uuid": "a43c610c-5381-46ad-a214-c64566cd0d76" 00:13:15.384 } 00:13:15.384 ] 00:13:15.384 } 00:13:15.384 ] 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2946045 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:15.384 14:21:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:15.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.644 [2024-06-10 14:21:53.029726] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.644 Malloc4 00:13:15.644 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:15.905 [2024-06-10 14:21:53.301460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.906 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:15.906 Asynchronous Event Request test 00:13:15.906 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.906 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:15.906 Registering asynchronous event callbacks... 00:13:15.906 Starting namespace attribute notice tests for all controllers... 00:13:15.906 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:15.906 aer_cb - Changed Namespace 00:13:15.906 Cleaning up... 00:13:16.167 [ 00:13:16.167 { 00:13:16.167 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.167 "subtype": "Discovery", 00:13:16.167 "listen_addresses": [], 00:13:16.167 "allow_any_host": true, 00:13:16.167 "hosts": [] 00:13:16.167 }, 00:13:16.167 { 00:13:16.167 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.167 "subtype": "NVMe", 00:13:16.167 "listen_addresses": [ 00:13:16.167 { 00:13:16.167 "trtype": "VFIOUSER", 00:13:16.167 "adrfam": "IPv4", 00:13:16.167 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.167 "trsvcid": "0" 00:13:16.167 } 00:13:16.167 ], 00:13:16.167 "allow_any_host": true, 00:13:16.167 "hosts": [], 00:13:16.167 "serial_number": "SPDK1", 00:13:16.167 "model_number": "SPDK bdev Controller", 00:13:16.167 "max_namespaces": 32, 00:13:16.167 "min_cntlid": 1, 00:13:16.167 "max_cntlid": 65519, 00:13:16.167 "namespaces": [ 00:13:16.167 { 00:13:16.167 "nsid": 1, 00:13:16.167 "bdev_name": "Malloc1", 00:13:16.167 "name": "Malloc1", 00:13:16.167 "nguid": "646AA5BC1CC6471E99C401DDAA811B53", 00:13:16.167 "uuid": "646aa5bc-1cc6-471e-99c4-01ddaa811b53" 00:13:16.167 }, 00:13:16.167 { 00:13:16.167 "nsid": 2, 00:13:16.167 "bdev_name": "Malloc3", 00:13:16.167 "name": "Malloc3", 00:13:16.167 "nguid": "7F05CEF2CA69411DBCB6FFAC38596E10", 00:13:16.167 "uuid": "7f05cef2-ca69-411d-bcb6-ffac38596e10" 00:13:16.167 } 00:13:16.167 ] 00:13:16.167 }, 00:13:16.167 { 00:13:16.167 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.167 "subtype": "NVMe", 00:13:16.167 "listen_addresses": [ 00:13:16.167 { 00:13:16.167 "trtype": "VFIOUSER", 00:13:16.167 "adrfam": "IPv4", 00:13:16.167 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.167 "trsvcid": "0" 00:13:16.167 } 00:13:16.167 ], 00:13:16.167 "allow_any_host": true, 00:13:16.167 "hosts": [], 00:13:16.167 "serial_number": "SPDK2", 00:13:16.167 "model_number": "SPDK bdev Controller", 00:13:16.167 
"max_namespaces": 32, 00:13:16.167 "min_cntlid": 1, 00:13:16.167 "max_cntlid": 65519, 00:13:16.167 "namespaces": [ 00:13:16.167 { 00:13:16.167 "nsid": 1, 00:13:16.167 "bdev_name": "Malloc2", 00:13:16.167 "name": "Malloc2", 00:13:16.167 "nguid": "A43C610C538146ADA214C64566CD0D76", 00:13:16.167 "uuid": "a43c610c-5381-46ad-a214-c64566cd0d76" 00:13:16.167 }, 00:13:16.167 { 00:13:16.167 "nsid": 2, 00:13:16.167 "bdev_name": "Malloc4", 00:13:16.167 "name": "Malloc4", 00:13:16.167 "nguid": "67F96BD6FDE14679B0459FB0E945B704", 00:13:16.167 "uuid": "67f96bd6-fde1-4679-b045-9fb0e945b704" 00:13:16.167 } 00:13:16.167 ] 00:13:16.167 } 00:13:16.167 ] 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2946045 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2936256 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2936256 ']' 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2936256 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2936256 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2936256' 00:13:16.167 killing process with pid 2936256 00:13:16.167 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2936256 00:13:16.168 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2936256 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2946248 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2946248' 00:13:16.429 Process pid: 2946248 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2946248 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2946248 ']' 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.429 14:21:53 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:16.429 14:21:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:16.429 [2024-06-10 14:21:53.826848] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:16.429 [2024-06-10 14:21:53.827759] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:13:16.429 [2024-06-10 14:21:53.827801] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.429 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.429 [2024-06-10 14:21:53.904113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.429 [2024-06-10 14:21:53.973743] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.429 [2024-06-10 14:21:53.973782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.429 [2024-06-10 14:21:53.973789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.429 [2024-06-10 14:21:53.973795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.429 [2024-06-10 14:21:53.973801] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.429 [2024-06-10 14:21:53.973941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.429 [2024-06-10 14:21:53.974080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.429 [2024-06-10 14:21:53.974226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.429 [2024-06-10 14:21:53.974228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.691 [2024-06-10 14:21:54.042131] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:16.691 [2024-06-10 14:21:54.042206] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:16.691 [2024-06-10 14:21:54.042584] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:16.691 [2024-06-10 14:21:54.043217] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:16.691 [2024-06-10 14:21:54.043219] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:17.261 14:21:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:17.261 14:21:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:13:17.261 14:21:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:18.200 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:18.461 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:18.461 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:18.461 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.461 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:18.461 14:21:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:18.721 Malloc1 00:13:18.721 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:18.982 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:18.982 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:19.243 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:19.243 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:19.243 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:19.503 Malloc2 00:13:19.503 14:21:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:19.764 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:20.024 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2946248 ']' 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:20.283 14:21:57 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2946248' 00:13:20.283 killing process with pid 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2946248 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:20.283 00:13:20.283 real 0m52.062s 00:13:20.283 user 3m26.962s 00:13:20.283 sys 0m3.198s 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:20.283 14:21:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:20.283 ************************************ 00:13:20.283 END TEST nvmf_vfio_user 00:13:20.283 ************************************ 00:13:20.545 14:21:57 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:20.545 14:21:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:20.545 14:21:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:20.545 14:21:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.545 ************************************ 00:13:20.545 START TEST nvmf_vfio_user_nvme_compliance 00:13:20.545 ************************************ 00:13:20.545 14:21:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:20.545 * Looking for test storage... 
00:13:20.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.545 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2947141 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2947141' 00:13:20.546 Process pid: 2947141 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2947141 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 2947141 ']' 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:20.546 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.546 [2024-06-10 14:21:58.126032] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:13:20.546 [2024-06-10 14:21:58.126112] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.807 [2024-06-10 14:21:58.207670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.807 [2024-06-10 14:21:58.278875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.807 [2024-06-10 14:21:58.278910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.807 [2024-06-10 14:21:58.278917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.807 [2024-06-10 14:21:58.278923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.807 [2024-06-10 14:21:58.278929] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.807 [2024-06-10 14:21:58.279037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.807 [2024-06-10 14:21:58.279170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.807 [2024-06-10 14:21:58.279173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.748 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:21.748 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:13:21.748 14:21:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.687 14:21:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.687 malloc0 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.687 14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.687 
14:22:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:22.687 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.687 00:13:22.687 00:13:22.687 CUnit - A unit testing framework for C - Version 2.1-3 00:13:22.687 http://cunit.sourceforge.net/ 00:13:22.687 00:13:22.687 00:13:22.687 Suite: nvme_compliance 00:13:22.687 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 14:22:00.228081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.687 [2024-06-10 14:22:00.229435] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:22.687 [2024-06-10 14:22:00.229450] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:22.687 [2024-06-10 14:22:00.229456] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:22.687 [2024-06-10 14:22:00.231109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.687 passed 00:13:22.947 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 14:22:00.326759] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.947 [2024-06-10 14:22:00.329782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.947 passed 00:13:22.947 Test: admin_identify_ns ...[2024-06-10 14:22:00.423578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.947 [2024-06-10 14:22:00.487329] ctrlr.c:2710:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:22.947 [2024-06-10 14:22:00.495326] ctrlr.c:2710:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:22.947 [2024-06-10 14:22:00.516437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.208 passed 00:13:23.208 Test: admin_get_features_mandatory_features ...[2024-06-10 14:22:00.608123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.208 [2024-06-10 14:22:00.611150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.208 passed 00:13:23.208 Test: admin_get_features_optional_features ...[2024-06-10 14:22:00.704691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.208 [2024-06-10 14:22:00.707710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.208 passed 00:13:23.208 Test: admin_set_features_number_of_queues ...[2024-06-10 14:22:00.801868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.468 [2024-06-10 14:22:00.906434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.468 passed 00:13:23.468 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 14:22:00.998054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.468 [2024-06-10 14:22:01.001076] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.468 passed 00:13:23.728 Test: admin_get_log_page_with_lpo ...[2024-06-10 14:22:01.095173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.728 [2024-06-10 14:22:01.162324] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:23.728 [2024-06-10 14:22:01.175383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.728 passed 00:13:23.728 Test: fabric_property_get ...[2024-06-10 14:22:01.267006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.728 [2024-06-10 14:22:01.268262] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:23.728 [2024-06-10 14:22:01.270027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.728 passed 00:13:23.988 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 14:22:01.363653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.988 [2024-06-10 14:22:01.364887] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:23.988 [2024-06-10 14:22:01.366672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:23.988 passed 00:13:23.988 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 14:22:01.460792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:23.988 [2024-06-10 14:22:01.544323] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.988 [2024-06-10 14:22:01.560325] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:23.988 [2024-06-10 14:22:01.565409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.248 passed 00:13:24.248 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 14:22:01.659001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.248 [2024-06-10 14:22:01.660225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:24.248 [2024-06-10 14:22:01.662023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.248 passed 00:13:24.248 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 14:22:01.755157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.248 [2024-06-10 14:22:01.830323] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:24.509 [2024-06-10 14:22:01.854325] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:24.509 [2024-06-10 14:22:01.859408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.509 passed 00:13:24.509 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 14:22:01.953018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.509 [2024-06-10 14:22:01.954252] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:24.509 [2024-06-10 14:22:01.954271] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:24.509 [2024-06-10 14:22:01.956036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.509 passed 00:13:24.509 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 14:22:02.049167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.769 [2024-06-10 14:22:02.140328] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:24.769 [2024-06-10 14:22:02.148320] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:24.769 [2024-06-10 14:22:02.156324] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:24.769 [2024-06-10 14:22:02.164323] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:24.769 [2024-06-10 14:22:02.193406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.769 passed 00:13:24.769 Test: admin_create_io_sq_verify_pc ...[2024-06-10 14:22:02.287026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.769 [2024-06-10 14:22:02.302330] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:24.769 [2024-06-10 14:22:02.320161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.769 passed 00:13:25.028 Test: admin_create_io_qp_max_qps ...[2024-06-10 14:22:02.413729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.969 [2024-06-10 14:22:03.529323] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:26.540 [2024-06-10 14:22:03.913375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.540 passed 00:13:26.540 Test: admin_create_io_sq_shared_cq ...[2024-06-10 14:22:04.005578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.801 [2024-06-10 14:22:04.137322] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:26.801 [2024-06-10 14:22:04.174388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.801 passed 00:13:26.801 00:13:26.801 Run Summary: Type Total Ran Passed Failed Inactive 00:13:26.801 suites 1 1 n/a 0 0 00:13:26.801 tests 18 18 18 0 0 00:13:26.801 asserts 360 360 360 0 n/a 00:13:26.801 00:13:26.801 Elapsed time = 1.655 seconds 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2947141 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 2947141 ']' 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 2947141 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2947141 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2947141' 00:13:26.801 killing process with pid 2947141 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 2947141 00:13:26.801 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 2947141 00:13:27.062 14:22:04 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:27.062 00:13:27.062 real 0m6.494s 00:13:27.062 user 0m18.627s 00:13:27.062 sys 0m0.476s 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:27.062 ************************************ 00:13:27.062 END TEST nvmf_vfio_user_nvme_compliance 00:13:27.062 ************************************ 00:13:27.062 14:22:04 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:27.062 14:22:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:27.062 14:22:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:27.062 14:22:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.062 ************************************ 00:13:27.062 START TEST nvmf_vfio_user_fuzz 00:13:27.062 ************************************ 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:27.062 * Looking for test storage... 00:13:27.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2948532 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2948532' 00:13:27.062 Process pid: 2948532 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2948532 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2948532 ']' 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:27.062 14:22:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:28.005 14:22:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:28.005 14:22:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:13:28.005 14:22:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:28.946 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.206 malloc0 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:29.206 14:22:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:01.365 Fuzzing completed. 
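For reference, the vfio-user fuzz target driven by vfio_user_fuzz.sh above reduces to a short RPC sequence. A minimal sketch of the same bring-up, assuming SPDK's scripts/rpc.py client on the default /var/tmp/spdk.sock socket (the core masks, bdev size, subsystem NQN, traddr and fuzzer flags are the ones visible in the log):

    # start the target, then configure it over RPC
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # fuzz the admin and I/O queues for 30 seconds with a fixed seed
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a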
Shutting down the fuzz application 00:14:01.365 00:14:01.365 Dumping successful admin opcodes: 00:14:01.365 8, 9, 10, 24, 00:14:01.365 Dumping successful io opcodes: 00:14:01.365 0, 00:14:01.365 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1014202, total successful commands: 3978, random_seed: 2087611584 00:14:01.365 NS: 0x200003a1ef00 admin qp, Total commands completed: 248482, total successful commands: 2010, random_seed: 1761182656 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2948532 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2948532 ']' 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 2948532 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:01.365 14:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2948532 00:14:01.365 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:01.365 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2948532' 00:14:01.366 killing process with pid 2948532 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 2948532 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 2948532 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:01.366 00:14:01.366 real 0m32.730s 00:14:01.366 user 0m39.316s 00:14:01.366 sys 0m22.452s 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:01.366 14:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:01.366 ************************************ 00:14:01.366 END TEST nvmf_vfio_user_fuzz 00:14:01.366 ************************************ 00:14:01.366 14:22:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.366 14:22:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:01.366 14:22:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:01.366 14:22:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.366 ************************************ 00:14:01.366 START TEST nvmf_host_management 00:14:01.366 
************************************ 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:01.366 * Looking for test storage... 00:14:01.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.366 14:22:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.960 14:22:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.960 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.960 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.960 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.960 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.960 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.961 14:22:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:14:07.961 00:14:07.961 --- 10.0.0.2 ping statistics --- 00:14:07.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.961 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:14:07.961 00:14:07.961 --- 10.0.0.1 ping statistics --- 00:14:07.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.961 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2958511 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2958511 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2958511 ']' 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
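The nvmftestinit sequence above splits the two detected E810 ports across network namespaces so the NVMe/TCP tests can run initiator-to-target over real hardware on one machine: cvl_0_0 (the target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (the initiator side, 10.0.0.1) stays in the default namespace. Condensed from the commands in the log (interface names and addresses are whatever this runner happened to detect):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let the NVMe/TCP port through
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions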
00:14:07.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:07.961 14:22:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.961 [2024-06-10 14:22:44.709202] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:14:07.961 [2024-06-10 14:22:44.709257] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.961 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.961 [2024-06-10 14:22:44.781113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.961 [2024-06-10 14:22:44.859564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.961 [2024-06-10 14:22:44.859600] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.961 [2024-06-10 14:22:44.859608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.961 [2024-06-10 14:22:44.859614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.961 [2024-06-10 14:22:44.859620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.961 [2024-06-10 14:22:44.859730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.961 [2024-06-10 14:22:44.859888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.961 [2024-06-10 14:22:44.860036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.961 [2024-06-10 14:22:44.860037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 [2024-06-10 14:22:45.631206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 Malloc0 00:14:08.223 [2024-06-10 14:22:45.690236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2958880 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2958880 /var/tmp/bdevperf.sock 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2958880 ']' 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
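The rpc_cmd batch above (fed from the rpcs.txt the script just wrote) is not echoed in the log, but the Malloc0 bdev name, the nqn.2016-06.io.spdk:cnode0/host0 NQNs used by bdevperf below, and the listener notice for 10.0.0.2 port 4420 suggest it amounts to roughly the following. This is an inferred sketch, not the literal batch, again assuming the scripts/rpc.py client:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # inferred from the later remove_host call
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420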
00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:08.223 { 00:14:08.223 "params": { 00:14:08.223 "name": "Nvme$subsystem", 00:14:08.223 "trtype": "$TEST_TRANSPORT", 00:14:08.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.223 "adrfam": "ipv4", 00:14:08.223 "trsvcid": "$NVMF_PORT", 00:14:08.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.223 "hdgst": ${hdgst:-false}, 00:14:08.223 "ddgst": ${ddgst:-false} 00:14:08.223 }, 00:14:08.223 "method": "bdev_nvme_attach_controller" 00:14:08.223 } 00:14:08.223 EOF 00:14:08.223 )") 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:08.223 14:22:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:08.223 "params": { 00:14:08.223 "name": "Nvme0", 00:14:08.223 "trtype": "tcp", 00:14:08.223 "traddr": "10.0.0.2", 00:14:08.223 "adrfam": "ipv4", 00:14:08.223 "trsvcid": "4420", 00:14:08.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.223 "hdgst": false, 00:14:08.223 "ddgst": false 00:14:08.223 }, 00:14:08.223 "method": "bdev_nvme_attach_controller" 00:14:08.223 }' 00:14:08.223 [2024-06-10 14:22:45.790538] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:14:08.223 [2024-06-10 14:22:45.790588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958880 ] 00:14:08.223 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.484 [2024-06-10 14:22:45.866034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.484 [2024-06-10 14:22:45.930936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.745 Running I/O for 10 seconds... 
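The bdevperf invocation above receives its bdev configuration as JSON on /dev/fd/63; the printf at the end of the block shows the attach parameters that end up in it. Written to an ordinary file, an equivalent run looks roughly like this (the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and the filename is illustrative; the attach parameters and bdevperf flags are copied from the log):

    cat > /tmp/nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10        # queue depth 64, 64 KiB I/O, verify workload, 10 seconds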
00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=786 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 786 -ge 100 ']' 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.321 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:09.321 [2024-06-10 14:22:46.737675] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be 
set 00:14:09.321 [2024-06-10 14:22:46.737758] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737764] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737778] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737790] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737809] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737834] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737840] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737865] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737871] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737877] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737884] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737906] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737912] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737919] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737925] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737937] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737956] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737962] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737974] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.737999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.738005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13da180 is same with the state(5) to be set 00:14:09.321 [2024-06-10 14:22:46.738260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.321 [2024-06-10 14:22:46.738297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.321 [2024-06-10 14:22:46.738322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:09.321 [2024-06-10 14:22:46.738330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.321 [2024-06-10 14:22:46.738340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.321 [2024-06-10 14:22:46.738347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.321 [2024-06-10 14:22:46.738356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.321 [2024-06-10 14:22:46.738363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.321 [2024-06-10 14:22:46.738372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:09.322 [2024-06-10 14:22:46.738498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 
[2024-06-10 14:22:46.738664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 
[2024-06-10 14:22:46.738831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.738982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.738991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 
14:22:46.738998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.739010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.739018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.322 [2024-06-10 14:22:46.739027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.322 [2024-06-10 14:22:46.739035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 
14:22:46.739170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:09.323 [2024-06-10 14:22:46.739381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:09.323 [2024-06-10 14:22:46.739435] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a6f00 was disconnected and freed. reset controller. 00:14:09.323 [2024-06-10 14:22:46.740641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:09.323 task offset: 109824 on job bdev=Nvme0n1 fails 00:14:09.323 00:14:09.323 Latency(us) 00:14:09.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.323 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:09.323 Job: Nvme0n1 ended in about 0.54 seconds with error 00:14:09.323 Verification LBA range: start 0x0 length 0x400 00:14:09.323 Nvme0n1 : 0.54 1578.91 98.68 117.77 0.00 36755.08 3099.31 36700.16 00:14:09.323 =================================================================================================================== 00:14:09.323 Total : 1578.91 98.68 117.77 0.00 36755.08 3099.31 36700.16 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.323 [2024-06-10 14:22:46.742660] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.323 [2024-06-10 14:22:46.742684] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4d510 (9): Bad file descriptor 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.323 14:22:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:09.323 [2024-06-10 14:22:46.763553] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
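A quick cross-check of the latency table above (a minimal sketch, not part of the captured output): bdevperf ran this job with 65536-byte IOs, so the MiB/s column is simply the IOPS column divided by 16.

    # 65536 B per IO -> MiB/s = IOPS * 65536 / 2^20 = IOPS / 16
    awk 'BEGIN { printf "%.2f MiB/s\n", 1578.91 / 16 }'    # prints 98.68, matching the Nvme0n1 row of the failed 0.54 s run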
00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2958880 00:14:10.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2958880) - No such process 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:10.269 { 00:14:10.269 "params": { 00:14:10.269 "name": "Nvme$subsystem", 00:14:10.269 "trtype": "$TEST_TRANSPORT", 00:14:10.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.269 "adrfam": "ipv4", 00:14:10.269 "trsvcid": "$NVMF_PORT", 00:14:10.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.269 "hdgst": ${hdgst:-false}, 00:14:10.269 "ddgst": ${ddgst:-false} 00:14:10.269 }, 00:14:10.269 "method": "bdev_nvme_attach_controller" 00:14:10.269 } 00:14:10.269 EOF 00:14:10.269 )") 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:10.269 14:22:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:10.269 "params": { 00:14:10.269 "name": "Nvme0", 00:14:10.269 "trtype": "tcp", 00:14:10.269 "traddr": "10.0.0.2", 00:14:10.269 "adrfam": "ipv4", 00:14:10.269 "trsvcid": "4420", 00:14:10.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:10.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:10.269 "hdgst": false, 00:14:10.269 "ddgst": false 00:14:10.269 }, 00:14:10.269 "method": "bdev_nvme_attach_controller" 00:14:10.269 }' 00:14:10.269 [2024-06-10 14:22:47.812464] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:14:10.269 [2024-06-10 14:22:47.812519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959234 ] 00:14:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.530 [2024-06-10 14:22:47.888552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.530 [2024-06-10 14:22:47.952179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.791 Running I/O for 1 seconds... 
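The bdevperf invocation above reads its configuration from /dev/fd/62: the JSON emitted by gen_nvmf_target_json is handed to bdevperf through process substitution instead of a file on disk. A minimal standalone sketch of that pattern, assuming the usual subsystems/bdev wrapper around the bdev_nvme_attach_controller entry printed above (addresses, NQNs and digest settings are the values from this run; the snippet is illustrative, not part of the test suite):

    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false } } ] } ] }'
    ./build/examples/bdevperf --json <(echo "$cfg") -q 64 -o 65536 -w verify -t 1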
00:14:11.734 00:14:11.734 Latency(us) 00:14:11.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.734 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:11.734 Verification LBA range: start 0x0 length 0x400 00:14:11.734 Nvme0n1 : 1.03 1613.08 100.82 0.00 0.00 38997.22 8246.61 35389.44 00:14:11.734 =================================================================================================================== 00:14:11.734 Total : 1613.08 100.82 0.00 0.00 38997.22 8246.61 35389.44 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.734 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.994 rmmod nvme_tcp 00:14:11.994 rmmod nvme_fabrics 00:14:11.994 rmmod nvme_keyring 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2958511 ']' 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2958511 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2958511 ']' 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2958511 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2958511 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2958511' 00:14:11.994 killing process with pid 2958511 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2958511 00:14:11.994 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2958511 00:14:11.995 [2024-06-10 14:22:49.555104] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.995 14:22:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.662 14:22:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:14.662 14:22:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:14.662 00:14:14.662 real 0m14.349s 00:14:14.662 user 0m23.632s 00:14:14.662 sys 0m6.343s 00:14:14.662 14:22:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:14.662 14:22:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.662 ************************************ 00:14:14.662 END TEST nvmf_host_management 00:14:14.662 ************************************ 00:14:14.662 14:22:51 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:14.662 14:22:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:14.662 14:22:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:14.662 14:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:14.662 ************************************ 00:14:14.662 START TEST nvmf_lvol 00:14:14.662 ************************************ 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:14.663 * Looking for test storage... 
00:14:14.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.663 14:22:51 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.663 14:22:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:21.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:21.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:21.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:21.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.254 
14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.254 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:14:21.516 00:14:21.516 --- 10.0.0.2 ping statistics --- 00:14:21.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.516 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:14:21.516 00:14:21.516 --- 10.0.0.1 ping statistics --- 00:14:21.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.516 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2963724 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2963724 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 2963724 ']' 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:21.516 14:22:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:21.516 [2024-06-10 14:22:59.048009] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:14:21.516 [2024-06-10 14:22:59.048074] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.516 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.777 [2024-06-10 14:22:59.135254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.777 [2024-06-10 14:22:59.231151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.777 [2024-06-10 14:22:59.231203] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.777 [2024-06-10 14:22:59.231212] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.777 [2024-06-10 14:22:59.231218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.777 [2024-06-10 14:22:59.231224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.777 [2024-06-10 14:22:59.231357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.777 [2024-06-10 14:22:59.231443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.777 [2024-06-10 14:22:59.231628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.349 14:22:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:22.349 14:22:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:22.349 14:22:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.349 14:22:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:22.349 14:22:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:22.610 14:22:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.610 14:22:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.610 [2024-06-10 14:23:00.159076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.610 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.871 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:22.871 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:23.132 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:23.132 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:23.392 14:23:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:23.653 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a7e64a23-fe99-495e-b13e-ba525328eb87 00:14:23.653 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a7e64a23-fe99-495e-b13e-ba525328eb87 lvol 20 00:14:23.914 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fdd4fc61-e628-4ccb-8fa9-e3b7222ca29c 00:14:23.914 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:23.914 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdd4fc61-e628-4ccb-8fa9-e3b7222ca29c 00:14:24.174 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
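Condensing the xtrace above, the lvol test has at this point provisioned its backing store with the following sequence (a recap of calls already logged, with the scripts/rpc.py paths shortened; the lvstore UUID is the one returned in this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
    scripts/rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> a7e64a23-fe99-495e-b13e-ba525328eb87
    scripts/rpc.py bdev_lvol_create -u a7e64a23-fe99-495e-b13e-ba525328eb87 lvol 20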
00:14:24.434 [2024-06-10 14:23:01.872233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.434 14:23:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.694 14:23:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2964288 00:14:24.694 14:23:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:24.694 14:23:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:24.694 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.634 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fdd4fc61-e628-4ccb-8fa9-e3b7222ca29c MY_SNAPSHOT 00:14:25.893 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b188d79a-9022-4f46-a322-96b4d5d6c637 00:14:25.893 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fdd4fc61-e628-4ccb-8fa9-e3b7222ca29c 30 00:14:26.154 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b188d79a-9022-4f46-a322-96b4d5d6c637 MY_CLONE 00:14:26.413 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9b2d4241-4051-44bb-9360-6b8ac85d7812 00:14:26.413 14:23:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9b2d4241-4051-44bb-9360-6b8ac85d7812 00:14:26.983 14:23:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2964288 00:14:35.115 Initializing NVMe Controllers 00:14:35.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:35.115 Controller IO queue size 128, less than required. 00:14:35.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:35.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:35.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:35.115 Initialization complete. Launching workers. 
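Stripped of the xtrace noise, the nvmf_lvol setup above boils down to a short RPC sequence. The sketch below uses rpc.py as shorthand for the SPDK scripts/rpc.py run against the target's default socket, and $lvs/$lvol/$snap/$clone stand for the UUIDs the create calls print; it is a minimal outline of what the test drives, not the test script itself.

  # RAID-0 over two malloc bdevs, with an lvstore and one lvol on top
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                    # -> Malloc0
  rpc.py bdev_malloc_create 64 512                    # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume

  # export the lvol over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # snapshot/clone churn while spdk_nvme_perf (randwrite, -q 128, cores 3-4) hammers the namespace
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30                  # grow the live lvol to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"                   # decouple the clone from its snapshot

The point of the test is that the lvol operations run while the perf job keeps 128 writes in flight, which is what the latency table that follows measures.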
00:14:35.115 ======================================================== 00:14:35.115 Latency(us) 00:14:35.115 Device Information : IOPS MiB/s Average min max 00:14:35.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12293.30 48.02 10415.51 1465.23 58110.59 00:14:35.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12374.90 48.34 10342.59 3672.69 60469.56 00:14:35.115 ======================================================== 00:14:35.115 Total : 24668.20 96.36 10378.93 1465.23 60469.56 00:14:35.115 00:14:35.115 14:23:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.115 14:23:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdd4fc61-e628-4ccb-8fa9-e3b7222ca29c 00:14:35.375 14:23:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7e64a23-fe99-495e-b13e-ba525328eb87 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.635 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.635 rmmod nvme_tcp 00:14:35.635 rmmod nvme_fabrics 00:14:35.635 rmmod nvme_keyring 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2963724 ']' 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2963724 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2963724 ']' 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 2963724 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2963724 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2963724' 00:14:35.636 killing process with pid 2963724 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2963724 00:14:35.636 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2963724 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.896 
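Teardown then mirrors setup in reverse before the host-side modules are unloaded; roughly:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop exposing the namespace
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  modprobe -v -r nvme-tcp                                   # initiator-side cleanup (nvmftestfini)
  modprobe -v -r nvme-fabrics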
14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.896 14:23:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.808 14:23:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.808 00:14:37.808 real 0m23.623s 00:14:37.808 user 1m6.034s 00:14:37.808 sys 0m7.677s 00:14:37.808 14:23:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:37.808 14:23:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:37.808 ************************************ 00:14:37.808 END TEST nvmf_lvol 00:14:37.808 ************************************ 00:14:38.069 14:23:15 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:38.069 14:23:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:38.069 14:23:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:38.069 14:23:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.069 ************************************ 00:14:38.069 START TEST nvmf_lvs_grow 00:14:38.069 ************************************ 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:38.069 * Looking for test storage... 
00:14:38.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.069 14:23:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.215 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.215 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.215 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:46.215 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:46.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.767 ms 00:14:46.216 00:14:46.216 --- 10.0.0.2 ping statistics --- 00:14:46.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.216 rtt min/avg/max/mdev = 0.767/0.767/0.767/0.000 ms 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:14:46.216 00:14:46.216 --- 10.0.0.1 ping statistics --- 00:14:46.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.216 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2970620 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2970620 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2970620 ']' 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:46.216 14:23:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.216 [2024-06-10 14:23:22.774908] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:14:46.216 [2024-06-10 14:23:22.774971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.216 [2024-06-10 14:23:22.862055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.216 [2024-06-10 14:23:22.957123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.216 [2024-06-10 14:23:22.957177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
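The lvs_grow suite reuses the single-host topology plumbed together just above: the target-side port of the e810 pair (cvl_0_0) is moved into its own network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic actually crosses between the two physical ports. In outline (interface names as detected above; a sketch of what nvmf_tcp_init does, not the function itself):

  ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt itself is then launched as "ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1", which is why every listener in this test binds to 10.0.0.2 inside the namespace.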
00:14:46.216 [2024-06-10 14:23:22.957186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.216 [2024-06-10 14:23:22.957193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.216 [2024-06-10 14:23:22.957199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.216 [2024-06-10 14:23:22.957223] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.216 14:23:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:46.476 [2024-06-10 14:23:23.898345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:46.477 ************************************ 00:14:46.477 START TEST lvs_grow_clean 00:14:46.477 ************************************ 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.477 14:23:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.737 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:46.737 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:46.997 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1823235c-dcde-430e-acaf-f92d2100ac7f 00:14:46.997 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:14:46.997 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:47.258 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:47.258 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:47.258 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1823235c-dcde-430e-acaf-f92d2100ac7f lvol 150 00:14:47.518 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9e6fa367-f86d-4c37-b1f1-2739b5d66f04 00:14:47.518 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.518 14:23:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:47.518 [2024-06-10 14:23:25.060500] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:47.518 [2024-06-10 14:23:25.060570] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:47.518 true 00:14:47.518 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:14:47.518 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:47.779 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:47.779 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:48.040 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9e6fa367-f86d-4c37-b1f1-2739b5d66f04 00:14:48.300 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:48.561 [2024-06-10 14:23:25.911091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.561 14:23:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2971336 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2971336 /var/tmp/bdevperf.sock 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2971336 ']' 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:48.561 14:23:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:48.821 [2024-06-10 14:23:26.194250] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
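The cluster counts that lvs_grow_clean asserts fall straight out of the sizes involved: a 200 MiB file-backed AIO bdev carved into 4 MiB clusters leaves 49 data clusters once lvstore metadata is set aside, and after the file is truncated to 400 MiB, rescanned and the lvstore grown, the count rises to 99. A sketch of that flow, with aio_file standing in for the test's aio_bdev backing file and rpc.py as before (the grow call itself shows up a little further down in the log):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol, most of the store

  truncate -s 400M aio_file                            # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev                      # ...and let the AIO bdev notice
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"              # claim the new space
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99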
00:14:48.821 [2024-06-10 14:23:26.194322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971336 ] 00:14:48.821 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.821 [2024-06-10 14:23:26.257023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.821 [2024-06-10 14:23:26.330699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.762 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:49.762 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:14:49.762 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:50.022 Nvme0n1 00:14:50.022 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:50.022 [ 00:14:50.022 { 00:14:50.022 "name": "Nvme0n1", 00:14:50.022 "aliases": [ 00:14:50.022 "9e6fa367-f86d-4c37-b1f1-2739b5d66f04" 00:14:50.022 ], 00:14:50.022 "product_name": "NVMe disk", 00:14:50.022 "block_size": 4096, 00:14:50.022 "num_blocks": 38912, 00:14:50.022 "uuid": "9e6fa367-f86d-4c37-b1f1-2739b5d66f04", 00:14:50.022 "assigned_rate_limits": { 00:14:50.022 "rw_ios_per_sec": 0, 00:14:50.022 "rw_mbytes_per_sec": 0, 00:14:50.022 "r_mbytes_per_sec": 0, 00:14:50.022 "w_mbytes_per_sec": 0 00:14:50.022 }, 00:14:50.022 "claimed": false, 00:14:50.022 "zoned": false, 00:14:50.022 "supported_io_types": { 00:14:50.022 "read": true, 00:14:50.022 "write": true, 00:14:50.022 "unmap": true, 00:14:50.022 "write_zeroes": true, 00:14:50.022 "flush": true, 00:14:50.022 "reset": true, 00:14:50.022 "compare": true, 00:14:50.022 "compare_and_write": true, 00:14:50.022 "abort": true, 00:14:50.022 "nvme_admin": true, 00:14:50.022 "nvme_io": true 00:14:50.022 }, 00:14:50.022 "memory_domains": [ 00:14:50.022 { 00:14:50.022 "dma_device_id": "system", 00:14:50.022 "dma_device_type": 1 00:14:50.022 } 00:14:50.022 ], 00:14:50.022 "driver_specific": { 00:14:50.022 "nvme": [ 00:14:50.022 { 00:14:50.022 "trid": { 00:14:50.022 "trtype": "TCP", 00:14:50.022 "adrfam": "IPv4", 00:14:50.022 "traddr": "10.0.0.2", 00:14:50.022 "trsvcid": "4420", 00:14:50.022 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:50.022 }, 00:14:50.022 "ctrlr_data": { 00:14:50.022 "cntlid": 1, 00:14:50.022 "vendor_id": "0x8086", 00:14:50.022 "model_number": "SPDK bdev Controller", 00:14:50.022 "serial_number": "SPDK0", 00:14:50.022 "firmware_revision": "24.09", 00:14:50.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.022 "oacs": { 00:14:50.023 "security": 0, 00:14:50.023 "format": 0, 00:14:50.023 "firmware": 0, 00:14:50.023 "ns_manage": 0 00:14:50.023 }, 00:14:50.023 "multi_ctrlr": true, 00:14:50.023 "ana_reporting": false 00:14:50.023 }, 00:14:50.023 "vs": { 00:14:50.023 "nvme_version": "1.3" 00:14:50.023 }, 00:14:50.023 "ns_data": { 00:14:50.023 "id": 1, 00:14:50.023 "can_share": true 00:14:50.023 } 00:14:50.023 } 00:14:50.023 ], 00:14:50.023 "mp_policy": "active_passive" 00:14:50.023 } 00:14:50.023 } 00:14:50.023 ] 00:14:50.283 14:23:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:50.283 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2971670 00:14:50.283 14:23:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:50.283 Running I/O for 10 seconds... 00:14:51.224 Latency(us) 00:14:51.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.224 Nvme0n1 : 1.00 18060.00 70.55 0.00 0.00 0.00 0.00 0.00 00:14:51.224 =================================================================================================================== 00:14:51.224 Total : 18060.00 70.55 0.00 0.00 0.00 0.00 0.00 00:14:51.224 00:14:52.163 14:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:14:52.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.163 Nvme0n1 : 2.00 18196.50 71.08 0.00 0.00 0.00 0.00 0.00 00:14:52.163 =================================================================================================================== 00:14:52.163 Total : 18196.50 71.08 0.00 0.00 0.00 0.00 0.00 00:14:52.163 00:14:52.424 true 00:14:52.424 14:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:14:52.424 14:23:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:52.684 14:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:52.684 14:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:52.684 14:23:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2971670 00:14:53.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.255 Nvme0n1 : 3.00 18199.67 71.09 0.00 0.00 0.00 0.00 0.00 00:14:53.255 =================================================================================================================== 00:14:53.255 Total : 18199.67 71.09 0.00 0.00 0.00 0.00 0.00 00:14:53.255 00:14:54.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.197 Nvme0n1 : 4.00 18253.00 71.30 0.00 0.00 0.00 0.00 0.00 00:14:54.197 =================================================================================================================== 00:14:54.197 Total : 18253.00 71.30 0.00 0.00 0.00 0.00 0.00 00:14:54.197 00:14:55.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.139 Nvme0n1 : 5.00 18272.80 71.38 0.00 0.00 0.00 0.00 0.00 00:14:55.139 =================================================================================================================== 00:14:55.139 Total : 18272.80 71.38 0.00 0.00 0.00 0.00 0.00 00:14:55.139 00:14:56.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.166 Nvme0n1 : 6.00 18296.00 71.47 0.00 0.00 0.00 0.00 0.00 00:14:56.166 
=================================================================================================================== 00:14:56.166 Total : 18296.00 71.47 0.00 0.00 0.00 0.00 0.00 00:14:56.166 00:14:57.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.551 Nvme0n1 : 7.00 18313.29 71.54 0.00 0.00 0.00 0.00 0.00 00:14:57.551 =================================================================================================================== 00:14:57.551 Total : 18313.29 71.54 0.00 0.00 0.00 0.00 0.00 00:14:57.551 00:14:58.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.493 Nvme0n1 : 8.00 18331.62 71.61 0.00 0.00 0.00 0.00 0.00 00:14:58.493 =================================================================================================================== 00:14:58.493 Total : 18331.62 71.61 0.00 0.00 0.00 0.00 0.00 00:14:58.493 00:14:59.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.436 Nvme0n1 : 9.00 18338.89 71.64 0.00 0.00 0.00 0.00 0.00 00:14:59.436 =================================================================================================================== 00:14:59.436 Total : 18338.89 71.64 0.00 0.00 0.00 0.00 0.00 00:14:59.436 00:15:00.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.379 Nvme0n1 : 10.00 18349.60 71.68 0.00 0.00 0.00 0.00 0.00 00:15:00.379 =================================================================================================================== 00:15:00.379 Total : 18349.60 71.68 0.00 0.00 0.00 0.00 0.00 00:15:00.379 00:15:00.379 00:15:00.379 Latency(us) 00:15:00.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.379 Nvme0n1 : 10.01 18352.97 71.69 0.00 0.00 6970.68 4314.45 13871.79 00:15:00.379 =================================================================================================================== 00:15:00.379 Total : 18352.97 71.69 0.00 0.00 6970.68 4314.45 13871.79 00:15:00.379 0 00:15:00.379 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2971336 00:15:00.379 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 2971336 ']' 00:15:00.379 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2971336 00:15:00.379 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2971336 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2971336' 00:15:00.380 killing process with pid 2971336 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2971336 00:15:00.380 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.380 00:15:00.380 Latency(us) 00:15:00.380 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:00.380 =================================================================================================================== 00:15:00.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2971336 00:15:00.380 14:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.641 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:00.901 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:00.901 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:01.162 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:01.162 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:01.162 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.423 [2024-06-10 14:23:38.770757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:01.423 14:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:01.684 request: 00:15:01.684 { 00:15:01.684 "uuid": "1823235c-dcde-430e-acaf-f92d2100ac7f", 00:15:01.685 "method": "bdev_lvol_get_lvstores", 00:15:01.685 "req_id": 1 00:15:01.685 } 00:15:01.685 Got JSON-RPC error response 00:15:01.685 response: 00:15:01.685 { 00:15:01.685 "code": -19, 00:15:01.685 "message": "No such device" 00:15:01.685 } 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.685 aio_bdev 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9e6fa367-f86d-4c37-b1f1-2739b5d66f04 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=9e6fa367-f86d-4c37-b1f1-2739b5d66f04 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:01.685 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.945 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9e6fa367-f86d-4c37-b1f1-2739b5d66f04 -t 2000 00:15:02.206 [ 00:15:02.206 { 00:15:02.206 "name": "9e6fa367-f86d-4c37-b1f1-2739b5d66f04", 00:15:02.206 "aliases": [ 00:15:02.206 "lvs/lvol" 00:15:02.206 ], 00:15:02.206 "product_name": "Logical Volume", 00:15:02.206 "block_size": 4096, 00:15:02.206 "num_blocks": 38912, 00:15:02.206 "uuid": "9e6fa367-f86d-4c37-b1f1-2739b5d66f04", 00:15:02.206 "assigned_rate_limits": { 00:15:02.206 "rw_ios_per_sec": 0, 00:15:02.206 "rw_mbytes_per_sec": 0, 00:15:02.206 "r_mbytes_per_sec": 0, 00:15:02.206 "w_mbytes_per_sec": 0 00:15:02.206 }, 00:15:02.206 "claimed": false, 00:15:02.206 "zoned": false, 00:15:02.206 "supported_io_types": { 00:15:02.206 "read": true, 00:15:02.206 "write": true, 00:15:02.206 "unmap": true, 00:15:02.206 "write_zeroes": true, 00:15:02.206 "flush": false, 00:15:02.206 "reset": true, 00:15:02.206 "compare": false, 00:15:02.206 "compare_and_write": false, 00:15:02.206 "abort": false, 00:15:02.206 "nvme_admin": false, 00:15:02.206 "nvme_io": false 00:15:02.206 }, 00:15:02.206 "driver_specific": { 00:15:02.206 "lvol": { 00:15:02.206 "lvol_store_uuid": "1823235c-dcde-430e-acaf-f92d2100ac7f", 00:15:02.206 "base_bdev": "aio_bdev", 
00:15:02.206 "thin_provision": false, 00:15:02.206 "num_allocated_clusters": 38, 00:15:02.206 "snapshot": false, 00:15:02.206 "clone": false, 00:15:02.206 "esnap_clone": false 00:15:02.206 } 00:15:02.206 } 00:15:02.206 } 00:15:02.206 ] 00:15:02.206 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:15:02.206 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:02.206 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:02.467 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:02.467 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:02.467 14:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:02.467 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:02.467 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9e6fa367-f86d-4c37-b1f1-2739b5d66f04 00:15:02.727 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1823235c-dcde-430e-acaf-f92d2100ac7f 00:15:02.988 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.248 00:15:03.248 real 0m16.779s 00:15:03.248 user 0m16.571s 00:15:03.248 sys 0m1.424s 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:03.248 ************************************ 00:15:03.248 END TEST lvs_grow_clean 00:15:03.248 ************************************ 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:03.248 ************************************ 00:15:03.248 START TEST lvs_grow_dirty 00:15:03.248 ************************************ 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.248 14:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.509 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:03.509 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:03.769 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:03.769 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:03.769 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:04.029 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:04.029 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:04.029 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 lvol 150 00:15:04.290 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:04.290 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.290 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:04.290 [2024-06-10 14:23:41.871081] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:04.290 [2024-06-10 14:23:41.871132] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:04.290 true 00:15:04.550 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:04.550 14:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:15:04.550 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:04.550 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:04.810 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:05.069 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:05.329 [2024-06-10 14:23:42.689491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2974739 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2974739 /var/tmp/bdevperf.sock 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2974739 ']' 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:05.329 14:23:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:05.589 [2024-06-10 14:23:42.959375] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:15:05.589 [2024-06-10 14:23:42.959426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974739 ] 00:15:05.589 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.589 [2024-06-10 14:23:43.017276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.589 [2024-06-10 14:23:43.082167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.529 14:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:06.529 14:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:06.529 14:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:06.789 Nvme0n1 00:15:06.789 14:23:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:07.049 [ 00:15:07.049 { 00:15:07.049 "name": "Nvme0n1", 00:15:07.049 "aliases": [ 00:15:07.049 "54d45a5a-2995-4d29-b97b-4e81e3096e3e" 00:15:07.049 ], 00:15:07.049 "product_name": "NVMe disk", 00:15:07.049 "block_size": 4096, 00:15:07.049 "num_blocks": 38912, 00:15:07.049 "uuid": "54d45a5a-2995-4d29-b97b-4e81e3096e3e", 00:15:07.049 "assigned_rate_limits": { 00:15:07.049 "rw_ios_per_sec": 0, 00:15:07.049 "rw_mbytes_per_sec": 0, 00:15:07.050 "r_mbytes_per_sec": 0, 00:15:07.050 "w_mbytes_per_sec": 0 00:15:07.050 }, 00:15:07.050 "claimed": false, 00:15:07.050 "zoned": false, 00:15:07.050 "supported_io_types": { 00:15:07.050 "read": true, 00:15:07.050 "write": true, 00:15:07.050 "unmap": true, 00:15:07.050 "write_zeroes": true, 00:15:07.050 "flush": true, 00:15:07.050 "reset": true, 00:15:07.050 "compare": true, 00:15:07.050 "compare_and_write": true, 00:15:07.050 "abort": true, 00:15:07.050 "nvme_admin": true, 00:15:07.050 "nvme_io": true 00:15:07.050 }, 00:15:07.050 "memory_domains": [ 00:15:07.050 { 00:15:07.050 "dma_device_id": "system", 00:15:07.050 "dma_device_type": 1 00:15:07.050 } 00:15:07.050 ], 00:15:07.050 "driver_specific": { 00:15:07.050 "nvme": [ 00:15:07.050 { 00:15:07.050 "trid": { 00:15:07.050 "trtype": "TCP", 00:15:07.050 "adrfam": "IPv4", 00:15:07.050 "traddr": "10.0.0.2", 00:15:07.050 "trsvcid": "4420", 00:15:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:07.050 }, 00:15:07.050 "ctrlr_data": { 00:15:07.050 "cntlid": 1, 00:15:07.050 "vendor_id": "0x8086", 00:15:07.050 "model_number": "SPDK bdev Controller", 00:15:07.050 "serial_number": "SPDK0", 00:15:07.050 "firmware_revision": "24.09", 00:15:07.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:07.050 "oacs": { 00:15:07.050 "security": 0, 00:15:07.050 "format": 0, 00:15:07.050 "firmware": 0, 00:15:07.050 "ns_manage": 0 00:15:07.050 }, 00:15:07.050 "multi_ctrlr": true, 00:15:07.050 "ana_reporting": false 00:15:07.050 }, 00:15:07.050 "vs": { 00:15:07.050 "nvme_version": "1.3" 00:15:07.050 }, 00:15:07.050 "ns_data": { 00:15:07.050 "id": 1, 00:15:07.050 "can_share": true 00:15:07.050 } 00:15:07.050 } 00:15:07.050 ], 00:15:07.050 "mp_policy": "active_passive" 00:15:07.050 } 00:15:07.050 } 00:15:07.050 ] 00:15:07.050 14:23:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2975081 00:15:07.050 14:23:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:07.050 14:23:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.050 Running I/O for 10 seconds... 00:15:07.990 Latency(us) 00:15:07.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.990 Nvme0n1 : 1.00 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:15:07.990 =================================================================================================================== 00:15:07.990 Total : 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:15:07.990 00:15:08.930 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:09.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.190 Nvme0n1 : 2.00 18168.50 70.97 0.00 0.00 0.00 0.00 0.00 00:15:09.190 =================================================================================================================== 00:15:09.190 Total : 18168.50 70.97 0.00 0.00 0.00 0.00 0.00 00:15:09.190 00:15:09.190 true 00:15:09.190 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:09.190 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:09.451 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:09.451 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:09.451 14:23:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2975081 00:15:10.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.022 Nvme0n1 : 3.00 18224.00 71.19 0.00 0.00 0.00 0.00 0.00 00:15:10.022 =================================================================================================================== 00:15:10.022 Total : 18224.00 71.19 0.00 0.00 0.00 0.00 0.00 00:15:10.022 00:15:10.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.962 Nvme0n1 : 4.00 18273.25 71.38 0.00 0.00 0.00 0.00 0.00 00:15:10.962 =================================================================================================================== 00:15:10.962 Total : 18273.25 71.38 0.00 0.00 0.00 0.00 0.00 00:15:10.962 00:15:12.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.347 Nvme0n1 : 5.00 18292.20 71.45 0.00 0.00 0.00 0.00 0.00 00:15:12.347 =================================================================================================================== 00:15:12.347 Total : 18292.20 71.45 0.00 0.00 0.00 0.00 0.00 00:15:12.347 00:15:13.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.289 Nvme0n1 : 6.00 18308.17 71.52 0.00 0.00 0.00 0.00 0.00 00:15:13.290 
=================================================================================================================== 00:15:13.290 Total : 18308.17 71.52 0.00 0.00 0.00 0.00 0.00 00:15:13.290 00:15:14.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.270 Nvme0n1 : 7.00 18323.43 71.58 0.00 0.00 0.00 0.00 0.00 00:15:14.270 =================================================================================================================== 00:15:14.270 Total : 18323.43 71.58 0.00 0.00 0.00 0.00 0.00 00:15:14.270 00:15:15.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.211 Nvme0n1 : 8.00 18341.62 71.65 0.00 0.00 0.00 0.00 0.00 00:15:15.211 =================================================================================================================== 00:15:15.211 Total : 18341.62 71.65 0.00 0.00 0.00 0.00 0.00 00:15:15.211 00:15:16.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.152 Nvme0n1 : 9.00 18356.22 71.70 0.00 0.00 0.00 0.00 0.00 00:15:16.152 =================================================================================================================== 00:15:16.152 Total : 18356.22 71.70 0.00 0.00 0.00 0.00 0.00 00:15:16.152 00:15:17.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.093 Nvme0n1 : 10.00 18362.30 71.73 0.00 0.00 0.00 0.00 0.00 00:15:17.093 =================================================================================================================== 00:15:17.093 Total : 18362.30 71.73 0.00 0.00 0.00 0.00 0.00 00:15:17.093 00:15:17.093 00:15:17.093 Latency(us) 00:15:17.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.093 Nvme0n1 : 10.01 18364.97 71.74 0.00 0.00 6966.24 4232.53 16711.68 00:15:17.093 =================================================================================================================== 00:15:17.093 Total : 18364.97 71.74 0.00 0.00 6966.24 4232.53 16711.68 00:15:17.093 0 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2974739 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2974739 ']' 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2974739 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2974739 00:15:17.093 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:17.094 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:17.094 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2974739' 00:15:17.094 killing process with pid 2974739 00:15:17.094 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2974739 00:15:17.094 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.094 00:15:17.094 Latency(us) 00:15:17.094 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:15:17.094 =================================================================================================================== 00:15:17.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.094 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2974739 00:15:17.354 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:17.614 14:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:17.614 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:17.614 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2970620 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2970620 00:15:17.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2970620 Killed "${NVMF_APP[@]}" "$@" 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2977113 00:15:17.874 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2977113 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2977113 ']' 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:17.875 14:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:18.135 [2024-06-10 14:23:55.482129] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:15:18.135 [2024-06-10 14:23:55.482188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.135 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.135 [2024-06-10 14:23:55.568212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.135 [2024-06-10 14:23:55.633784] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.135 [2024-06-10 14:23:55.633819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.135 [2024-06-10 14:23:55.633826] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.135 [2024-06-10 14:23:55.633833] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.135 [2024-06-10 14:23:55.633838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.135 [2024-06-10 14:23:55.633860] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:19.077 [2024-06-10 14:23:56.571296] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:19.077 [2024-06-10 14:23:56.571390] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:19.077 [2024-06-10 14:23:56.571419] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:19.077 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:19.337 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54d45a5a-2995-4d29-b97b-4e81e3096e3e -t 2000 00:15:19.597 [ 00:15:19.597 { 00:15:19.597 "name": "54d45a5a-2995-4d29-b97b-4e81e3096e3e", 00:15:19.597 "aliases": [ 00:15:19.597 "lvs/lvol" 00:15:19.597 ], 00:15:19.597 "product_name": "Logical Volume", 00:15:19.597 "block_size": 4096, 00:15:19.597 "num_blocks": 38912, 00:15:19.597 "uuid": "54d45a5a-2995-4d29-b97b-4e81e3096e3e", 00:15:19.597 "assigned_rate_limits": { 00:15:19.597 "rw_ios_per_sec": 0, 00:15:19.597 "rw_mbytes_per_sec": 0, 00:15:19.597 "r_mbytes_per_sec": 0, 00:15:19.597 "w_mbytes_per_sec": 0 00:15:19.597 }, 00:15:19.597 "claimed": false, 00:15:19.597 "zoned": false, 00:15:19.597 "supported_io_types": { 00:15:19.597 "read": true, 00:15:19.597 "write": true, 00:15:19.597 "unmap": true, 00:15:19.597 "write_zeroes": true, 00:15:19.597 "flush": false, 00:15:19.597 "reset": true, 00:15:19.597 "compare": false, 00:15:19.597 "compare_and_write": false, 00:15:19.597 "abort": false, 00:15:19.597 "nvme_admin": false, 00:15:19.597 "nvme_io": false 00:15:19.597 }, 00:15:19.597 "driver_specific": { 00:15:19.597 "lvol": { 00:15:19.597 "lvol_store_uuid": "a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6", 00:15:19.597 "base_bdev": "aio_bdev", 00:15:19.597 "thin_provision": false, 00:15:19.597 "num_allocated_clusters": 38, 00:15:19.597 "snapshot": false, 00:15:19.597 "clone": false, 00:15:19.597 "esnap_clone": false 00:15:19.597 } 00:15:19.597 } 00:15:19.597 } 00:15:19.597 ] 00:15:19.597 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:19.597 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:19.597 14:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:19.867 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:19.867 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:19.867 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:19.867 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:19.867 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:20.127 [2024-06-10 14:23:57.591891] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:20.127 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:20.387 request: 00:15:20.387 { 00:15:20.387 "uuid": "a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6", 00:15:20.387 "method": "bdev_lvol_get_lvstores", 00:15:20.387 "req_id": 1 00:15:20.387 } 00:15:20.387 Got JSON-RPC error response 00:15:20.387 response: 00:15:20.387 { 00:15:20.387 "code": -19, 00:15:20.387 "message": "No such device" 00:15:20.387 } 00:15:20.387 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:20.387 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:20.387 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:20.387 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:20.387 14:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:20.647 aio_bdev 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:20.647 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:20.906 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54d45a5a-2995-4d29-b97b-4e81e3096e3e -t 2000 00:15:20.906 [ 00:15:20.906 { 00:15:20.906 "name": "54d45a5a-2995-4d29-b97b-4e81e3096e3e", 00:15:20.906 "aliases": [ 00:15:20.906 "lvs/lvol" 00:15:20.906 ], 00:15:20.906 "product_name": "Logical Volume", 00:15:20.906 "block_size": 4096, 00:15:20.906 "num_blocks": 38912, 00:15:20.906 "uuid": "54d45a5a-2995-4d29-b97b-4e81e3096e3e", 00:15:20.906 "assigned_rate_limits": { 00:15:20.906 "rw_ios_per_sec": 0, 00:15:20.906 "rw_mbytes_per_sec": 0, 00:15:20.906 "r_mbytes_per_sec": 0, 00:15:20.906 "w_mbytes_per_sec": 0 00:15:20.906 }, 00:15:20.906 "claimed": false, 00:15:20.906 "zoned": false, 00:15:20.906 "supported_io_types": { 00:15:20.906 "read": true, 00:15:20.906 "write": true, 00:15:20.906 "unmap": true, 00:15:20.906 "write_zeroes": true, 00:15:20.906 "flush": false, 00:15:20.906 "reset": true, 00:15:20.906 "compare": false, 00:15:20.906 "compare_and_write": false, 00:15:20.906 "abort": false, 00:15:20.906 "nvme_admin": false, 00:15:20.906 "nvme_io": false 00:15:20.906 }, 00:15:20.906 "driver_specific": { 00:15:20.906 "lvol": { 00:15:20.906 "lvol_store_uuid": "a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6", 00:15:20.906 "base_bdev": "aio_bdev", 00:15:20.906 "thin_provision": false, 00:15:20.906 "num_allocated_clusters": 38, 00:15:20.906 "snapshot": false, 00:15:20.906 "clone": false, 00:15:20.906 "esnap_clone": false 00:15:20.906 } 00:15:20.906 } 00:15:20.906 } 00:15:20.906 ] 00:15:20.906 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:20.906 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:20.906 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:21.166 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:21.166 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:21.166 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:21.426 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:21.426 14:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 54d45a5a-2995-4d29-b97b-4e81e3096e3e 00:15:21.687 14:23:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1cbd863-aca5-4f5a-a6d2-7c6c13e213a6 00:15:21.948 14:23:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:21.948 14:23:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:22.208 00:15:22.208 real 0m18.739s 00:15:22.208 user 0m48.811s 00:15:22.208 sys 0m3.013s 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:22.208 ************************************ 00:15:22.208 END TEST lvs_grow_dirty 00:15:22.208 ************************************ 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:22.208 nvmf_trace.0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:22.208 rmmod nvme_tcp 00:15:22.208 rmmod nvme_fabrics 00:15:22.208 rmmod nvme_keyring 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2977113 ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2977113 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2977113 ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2977113 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2977113 00:15:22.208 14:23:59 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2977113' 00:15:22.208 killing process with pid 2977113 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2977113 00:15:22.208 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2977113 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.468 14:23:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.013 14:24:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:25.013 00:15:25.013 real 0m46.556s 00:15:25.013 user 1m12.300s 00:15:25.013 sys 0m10.204s 00:15:25.013 14:24:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:25.013 14:24:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:25.013 ************************************ 00:15:25.013 END TEST nvmf_lvs_grow 00:15:25.013 ************************************ 00:15:25.013 14:24:02 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:25.013 14:24:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:25.013 14:24:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:25.013 14:24:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.013 ************************************ 00:15:25.013 START TEST nvmf_bdev_io_wait 00:15:25.013 ************************************ 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:25.013 * Looking for test storage... 
00:15:25.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.013 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:25.014 14:24:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.604 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:31.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:31.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:31.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:31.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.605 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:31.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:31.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:15:31.867 00:15:31.867 --- 10.0.0.2 ping statistics --- 00:15:31.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.867 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:15:31.867 00:15:31.867 --- 10.0.0.1 ping statistics --- 00:15:31.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.867 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2982445 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2982445 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2982445 ']' 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:31.867 14:24:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:32.130 [2024-06-10 14:24:09.473942] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:15:32.130 [2024-06-10 14:24:09.473998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.130 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.130 [2024-06-10 14:24:09.558297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.130 [2024-06-10 14:24:09.658422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.130 [2024-06-10 14:24:09.658477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.130 [2024-06-10 14:24:09.658491] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.130 [2024-06-10 14:24:09.658499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.130 [2024-06-10 14:24:09.658506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.130 [2024-06-10 14:24:09.658662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.130 [2024-06-10 14:24:09.658824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.130 [2024-06-10 14:24:09.658988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.130 [2024-06-10 14:24:09.658988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.073 [2024-06-10 14:24:10.462813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.073 14:24:10 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.073 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.073 Malloc0 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:33.074 [2024-06-10 14:24:10.533551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2982968 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2982970 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.074 { 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme$subsystem", 00:15:33.074 "trtype": "$TEST_TRANSPORT", 00:15:33.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "$NVMF_PORT", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.074 "hdgst": ${hdgst:-false}, 00:15:33.074 "ddgst": ${ddgst:-false} 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 } 00:15:33.074 EOF 00:15:33.074 )") 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2982972 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.074 { 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme$subsystem", 00:15:33.074 "trtype": "$TEST_TRANSPORT", 00:15:33.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "$NVMF_PORT", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.074 "hdgst": ${hdgst:-false}, 00:15:33.074 "ddgst": ${ddgst:-false} 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 } 00:15:33.074 EOF 00:15:33.074 )") 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2982975 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.074 { 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme$subsystem", 00:15:33.074 "trtype": "$TEST_TRANSPORT", 00:15:33.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "$NVMF_PORT", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.074 "hdgst": ${hdgst:-false}, 00:15:33.074 "ddgst": ${ddgst:-false} 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 } 00:15:33.074 EOF 00:15:33.074 )") 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:33.074 { 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme$subsystem", 00:15:33.074 "trtype": "$TEST_TRANSPORT", 00:15:33.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "$NVMF_PORT", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.074 "hdgst": ${hdgst:-false}, 00:15:33.074 "ddgst": ${ddgst:-false} 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 } 00:15:33.074 EOF 00:15:33.074 )") 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2982968 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme1", 00:15:33.074 "trtype": "tcp", 00:15:33.074 "traddr": "10.0.0.2", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "4420", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.074 "hdgst": false, 00:15:33.074 "ddgst": false 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 }' 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme1", 00:15:33.074 "trtype": "tcp", 00:15:33.074 "traddr": "10.0.0.2", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "4420", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.074 "hdgst": false, 00:15:33.074 "ddgst": false 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 }' 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme1", 00:15:33.074 "trtype": "tcp", 00:15:33.074 "traddr": "10.0.0.2", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "4420", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.074 "hdgst": false, 00:15:33.074 "ddgst": false 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 00:15:33.074 }' 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:33.074 14:24:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.074 "params": { 00:15:33.074 "name": "Nvme1", 00:15:33.074 "trtype": "tcp", 00:15:33.074 "traddr": "10.0.0.2", 00:15:33.074 "adrfam": "ipv4", 00:15:33.074 "trsvcid": "4420", 00:15:33.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.074 "hdgst": false, 00:15:33.074 "ddgst": false 00:15:33.074 }, 00:15:33.074 "method": "bdev_nvme_attach_controller" 
00:15:33.074 }' 00:15:33.074 [2024-06-10 14:24:10.586064] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:15:33.075 [2024-06-10 14:24:10.586117] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:33.075 [2024-06-10 14:24:10.588896] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:15:33.075 [2024-06-10 14:24:10.588942] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:33.075 [2024-06-10 14:24:10.589235] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:15:33.075 [2024-06-10 14:24:10.589277] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:33.075 [2024-06-10 14:24:10.597705] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:15:33.075 [2024-06-10 14:24:10.597797] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:33.075 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.334 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.334 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.334 [2024-06-10 14:24:10.726976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.334 [2024-06-10 14:24:10.769554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.334 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.334 [2024-06-10 14:24:10.778925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:33.334 [2024-06-10 14:24:10.819051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.334 [2024-06-10 14:24:10.821196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:15:33.334 [2024-06-10 14:24:10.869307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:15:33.334 [2024-06-10 14:24:10.875820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.334 [2024-06-10 14:24:10.927815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:15:33.593 Running I/O for 1 seconds... 00:15:33.593 Running I/O for 1 seconds... 00:15:33.593 Running I/O for 1 seconds... 00:15:33.593 Running I/O for 1 seconds... 
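(The four bdevperf jobs launched above — write, read, flush, unmap — are all driven the same way: a small JSON config that attaches one NVMe-oF controller over TCP is generated on the fly and handed to bdevperf via /dev/fd/63. A hand-written equivalent for the "write" job might look like the sketch below; the params mirror what the trace prints, while the surrounding "subsystems" wrapper and the file path are assumptions for illustration.)

#!/usr/bin/env bash
# Sketch: run one of the four bdevperf workloads against the target above.
# The attach parameters are copied from the generated config printed in the
# trace; /tmp/nvme1.json and the "subsystems" wrapper are illustrative.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the WRITE_PID instance above: core mask 0x10, shm id 1,
# queue depth 128, 4 KiB I/O, 1 second run, 256 MiB of hugepage memory.
# Run from the root of an SPDK build tree.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256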
00:15:34.536 00:15:34.536 Latency(us) 00:15:34.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.536 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:34.536 Nvme1n1 : 1.01 13036.11 50.92 0.00 0.00 9786.32 5406.72 18677.76 00:15:34.536 =================================================================================================================== 00:15:34.536 Total : 13036.11 50.92 0.00 0.00 9786.32 5406.72 18677.76 00:15:34.536 00:15:34.536 Latency(us) 00:15:34.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.536 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:34.536 Nvme1n1 : 1.01 11733.42 45.83 0.00 0.00 10869.15 6471.68 18568.53 00:15:34.536 =================================================================================================================== 00:15:34.536 Total : 11733.42 45.83 0.00 0.00 10869.15 6471.68 18568.53 00:15:34.536 00:15:34.536 Latency(us) 00:15:34.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.536 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:34.536 Nvme1n1 : 1.00 18920.06 73.91 0.00 0.00 6749.49 3372.37 18350.08 00:15:34.536 =================================================================================================================== 00:15:34.536 Total : 18920.06 73.91 0.00 0.00 6749.49 3372.37 18350.08 00:15:34.796 00:15:34.796 Latency(us) 00:15:34.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.796 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:34.796 Nvme1n1 : 1.00 185844.34 725.95 0.00 0.00 686.31 274.77 768.00 00:15:34.796 =================================================================================================================== 00:15:34.797 Total : 185844.34 725.95 0.00 0.00 686.31 274.77 768.00 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2982970 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2982972 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2982975 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.797 rmmod nvme_tcp 00:15:34.797 rmmod nvme_fabrics 00:15:34.797 rmmod nvme_keyring 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2982445 ']' 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2982445 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2982445 ']' 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2982445 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:34.797 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2982445 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2982445' 00:15:35.058 killing process with pid 2982445 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2982445 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2982445 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.058 14:24:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.602 14:24:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.602 00:15:37.602 real 0m12.544s 00:15:37.602 user 0m18.985s 00:15:37.602 sys 0m6.739s 00:15:37.602 14:24:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:37.602 14:24:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:37.602 ************************************ 00:15:37.602 END TEST nvmf_bdev_io_wait 00:15:37.602 ************************************ 00:15:37.602 14:24:14 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:37.602 14:24:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:37.602 14:24:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:37.602 14:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.602 ************************************ 00:15:37.602 START TEST nvmf_queue_depth 00:15:37.602 ************************************ 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:37.602 * Looking for test storage... 00:15:37.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.602 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.603 14:24:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.186 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:44.187 
14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:44.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:44.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:44.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:44.187 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:44.187 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.448 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.448 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.448 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:44.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:44.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:15:44.448 00:15:44.448 --- 10.0.0.2 ping statistics --- 00:15:44.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.448 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:15:44.448 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:15:44.449 00:15:44.449 --- 10.0.0.1 ping statistics --- 00:15:44.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.449 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2987687 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2987687 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2987687 ']' 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:44.449 14:24:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:44.449 [2024-06-10 14:24:21.935148] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
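(Condensed, the target-side bring-up that the queue-depth test performs — the nvmf_tgt launch above plus the rpc_cmd calls that follow — is sketched below. SPDK_ROOT stands in for the checkout path, the sleep stands in for the harness's waitforlisten, and the RPC flags are reproduced exactly as the trace passes them.)

#!/usr/bin/env bash
# Sketch of the nvmf_queue_depth target setup: start nvmf_tgt inside the
# target namespace, then configure it over the default /var/tmp/spdk.sock.
# SPDK_ROOT is a placeholder; run as root.
SPDK_ROOT=./spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2    # the harness uses waitforlisten; a sleep stands in for it here
rpc="$SPDK_ROOT/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192      # same transport opts as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420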
00:15:44.449 [2024-06-10 14:24:21.935212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.449 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.449 [2024-06-10 14:24:22.007040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.762 [2024-06-10 14:24:22.080477] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.762 [2024-06-10 14:24:22.080514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.762 [2024-06-10 14:24:22.080522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.762 [2024-06-10 14:24:22.080529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.762 [2024-06-10 14:24:22.080534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.762 [2024-06-10 14:24:22.080556] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 [2024-06-10 14:24:22.839752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 Malloc0 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.336 14:24:22 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.336 [2024-06-10 14:24:22.910329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2987808 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2987808 /var/tmp/bdevperf.sock 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2987808 ']' 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:45.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:45.336 14:24:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.596 [2024-06-10 14:24:22.961349] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
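(The initiator side of the queue-depth run, traced in what follows, is the usual three-step bdevperf-in-RPC-mode flow: start bdevperf idle, attach the NVMe-oF controller over its RPC socket, then trigger the workload. A condensed sketch, with SPDK_ROOT again a placeholder for the checkout:)

#!/usr/bin/env bash
# Sketch of how the harness drives bdevperf for the 1024-deep verify run.
SPDK_ROOT=./spdk
SOCK=/var/tmp/bdevperf.sock
"$SPDK_ROOT/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
sleep 2    # stand-in for waitforlisten on $SOCK
"$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill %1    # the harness kills bdevperf once the results are collected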
00:15:45.596 [2024-06-10 14:24:22.961395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987808 ] 00:15:45.596 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.596 [2024-06-10 14:24:23.034261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.596 [2024-06-10 14:24:23.098842] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.596 14:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:45.596 14:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:45.596 14:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:45.596 14:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.597 14:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:45.856 NVMe0n1 00:15:45.856 14:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.856 14:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.856 Running I/O for 10 seconds... 00:15:55.857 00:15:55.857 Latency(us) 00:15:55.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.857 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:55.857 Verification LBA range: start 0x0 length 0x4000 00:15:55.857 NVMe0n1 : 10.05 9477.63 37.02 0.00 0.00 107618.17 4341.76 72963.41 00:15:55.857 =================================================================================================================== 00:15:55.857 Total : 9477.63 37.02 0.00 0.00 107618.17 4341.76 72963.41 00:15:55.857 0 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2987808 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2987808 ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2987808 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2987808 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2987808' 00:15:56.118 killing process with pid 2987808 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2987808 00:15:56.118 Received shutdown signal, test time was about 10.000000 seconds 00:15:56.118 00:15:56.118 Latency(us) 00:15:56.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.118 =================================================================================================================== 00:15:56.118 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2987808 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.118 rmmod nvme_tcp 00:15:56.118 rmmod nvme_fabrics 00:15:56.118 rmmod nvme_keyring 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2987687 ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2987687 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2987687 ']' 00:15:56.118 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2987687 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2987687 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2987687' 00:15:56.390 killing process with pid 2987687 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2987687 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2987687 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.390 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.391 14:24:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.936 14:24:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.936 00:15:58.936 real 0m21.280s 00:15:58.936 user 0m24.485s 00:15:58.936 sys 
0m6.276s 00:15:58.936 14:24:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.936 14:24:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:58.936 ************************************ 00:15:58.936 END TEST nvmf_queue_depth 00:15:58.936 ************************************ 00:15:58.936 14:24:36 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:58.936 14:24:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:58.936 14:24:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:58.936 14:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.936 ************************************ 00:15:58.936 START TEST nvmf_target_multipath 00:15:58.936 ************************************ 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:58.936 * Looking for test storage... 00:15:58.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.936 14:24:36 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.936 14:24:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
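The NVME_CONNECT and NVME_HOST variables sourced above are the pieces an initiator-side connect is built from. This multipath run later bails out with "only one NIC for nvmf test", so no connect is actually issued here; purely as an illustrative sketch, using the 10.0.0.2:4420 listener and the cnode1 subsystem seen elsewhere in this log, the connect those variables feed would look roughly like:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"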
00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.937 14:24:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:05.529 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:05.529 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:05.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:05.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.529 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.530 14:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.530 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.530 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.530 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:05.530 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:05.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:16:05.791 00:16:05.791 --- 10.0.0.2 ping statistics --- 00:16:05.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.791 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:16:05.791 00:16:05.791 --- 10.0.0.1 ping statistics --- 00:16:05.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.791 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:05.791 only one NIC for nvmf test 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.791 rmmod nvme_tcp 00:16:05.791 rmmod nvme_fabrics 00:16:05.791 rmmod nvme_keyring 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.791 14:24:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.339 00:16:08.339 real 0m9.363s 00:16:08.339 user 0m2.027s 00:16:08.339 sys 0m5.246s 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:08.339 14:24:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:08.339 ************************************ 00:16:08.339 END TEST nvmf_target_multipath 00:16:08.339 ************************************ 00:16:08.339 14:24:45 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:08.339 14:24:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:08.339 14:24:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:08.339 14:24:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.339 ************************************ 00:16:08.339 START TEST nvmf_zcopy 00:16:08.339 ************************************ 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:08.339 * Looking for test storage... 
00:16:08.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.339 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.340 14:24:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:14.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.927 
14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:14.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:14.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:14.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.927 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.187 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.187 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.187 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.736 ms 00:16:15.188 00:16:15.188 --- 10.0.0.2 ping statistics --- 00:16:15.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.188 rtt min/avg/max/mdev = 0.736/0.736/0.736/0.000 ms 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:16:15.188 00:16:15.188 --- 10.0.0.1 ping statistics --- 00:16:15.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.188 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2998230 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2998230 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2998230 ']' 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:15.188 14:24:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.448 [2024-06-10 14:24:52.799184] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:16:15.448 [2024-06-10 14:24:52.799247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.448 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.448 [2024-06-10 14:24:52.870997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.448 [2024-06-10 14:24:52.943387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.448 [2024-06-10 14:24:52.943427] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:15.448 [2024-06-10 14:24:52.943435] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.448 [2024-06-10 14:24:52.943441] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.448 [2024-06-10 14:24:52.943447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.448 [2024-06-10 14:24:52.943472] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.448 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:15.448 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:16:15.448 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.448 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:15.448 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 [2024-06-10 14:24:53.060921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 [2024-06-10 14:24:53.085104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 malloc0 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 
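The rpc_cmd calls traced above are the autotest wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock. Reproduced by hand, the zcopy target bring-up up to this point reduces to roughly the following sequence (a sketch of the same RPCs as logged, not the harness code itself; the rpc variable name is just shorthand):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # zero-copy capable TCP transport, then a subsystem that allows any host, serial SPDK00000000000001, up to 10 namespaces
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data and discovery listeners on the namespaced 10.0.0.2 interface
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks; the next RPC in the trace attaches it as NSID 1
    $rpc bdev_malloc_create 32 4096 -b malloc0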
14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:15.710 { 00:16:15.710 "params": { 00:16:15.710 "name": "Nvme$subsystem", 00:16:15.710 "trtype": "$TEST_TRANSPORT", 00:16:15.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.710 "adrfam": "ipv4", 00:16:15.710 "trsvcid": "$NVMF_PORT", 00:16:15.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.710 "hdgst": ${hdgst:-false}, 00:16:15.710 "ddgst": ${ddgst:-false} 00:16:15.710 }, 00:16:15.710 "method": "bdev_nvme_attach_controller" 00:16:15.710 } 00:16:15.710 EOF 00:16:15.710 )") 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:15.710 14:24:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:15.711 "params": { 00:16:15.711 "name": "Nvme1", 00:16:15.711 "trtype": "tcp", 00:16:15.711 "traddr": "10.0.0.2", 00:16:15.711 "adrfam": "ipv4", 00:16:15.711 "trsvcid": "4420", 00:16:15.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.711 "hdgst": false, 00:16:15.711 "ddgst": false 00:16:15.711 }, 00:16:15.711 "method": "bdev_nvme_attach_controller" 00:16:15.711 }' 00:16:15.711 [2024-06-10 14:24:53.178709] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:16:15.711 [2024-06-10 14:24:53.178755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998434 ] 00:16:15.711 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.711 [2024-06-10 14:24:53.254557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.971 [2024-06-10 14:24:53.319196] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.231 Running I/O for 10 seconds... 
00:16:26.223 00:16:26.223 Latency(us) 00:16:26.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:26.223 Verification LBA range: start 0x0 length 0x1000 00:16:26.223 Nvme1n1 : 10.01 6864.35 53.63 0.00 0.00 18589.71 873.81 27197.44 00:16:26.223 =================================================================================================================== 00:16:26.223 Total : 6864.35 53.63 0.00 0.00 18589.71 873.81 27197.44 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3000471 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:26.484 { 00:16:26.484 "params": { 00:16:26.484 "name": "Nvme$subsystem", 00:16:26.484 "trtype": "$TEST_TRANSPORT", 00:16:26.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.484 "adrfam": "ipv4", 00:16:26.484 "trsvcid": "$NVMF_PORT", 00:16:26.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.484 "hdgst": ${hdgst:-false}, 00:16:26.484 "ddgst": ${ddgst:-false} 00:16:26.484 }, 00:16:26.484 "method": "bdev_nvme_attach_controller" 00:16:26.484 } 00:16:26.484 EOF 00:16:26.484 )") 00:16:26.484 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:26.484 [2024-06-10 14:25:03.836805] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.484 [2024-06-10 14:25:03.836840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:26.485 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:26.485 14:25:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:26.485 "params": { 00:16:26.485 "name": "Nvme1", 00:16:26.485 "trtype": "tcp", 00:16:26.485 "traddr": "10.0.0.2", 00:16:26.485 "adrfam": "ipv4", 00:16:26.485 "trsvcid": "4420", 00:16:26.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.485 "hdgst": false, 00:16:26.485 "ddgst": false 00:16:26.485 }, 00:16:26.485 "method": "bdev_nvme_attach_controller" 00:16:26.485 }' 00:16:26.485 [2024-06-10 14:25:03.848800] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.848811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.860829] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.860840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.872862] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.872871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.877053] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:16:26.485 [2024-06-10 14:25:03.877098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000471 ] 00:16:26.485 [2024-06-10 14:25:03.884895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.884904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.896927] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.896937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.485 [2024-06-10 14:25:03.908960] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.908970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.920991] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.921000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.933023] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.933033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.945055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.945065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.952388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.485 [2024-06-10 14:25:03.957087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.957098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.969127] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.969140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.981153] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.981164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:03.993186] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:03.993200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.005218] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.005228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.016838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.485 [2024-06-10 14:25:04.017250] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.017260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.029285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.029298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.041322] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.041335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.053350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.053361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.065381] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.065392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.485 [2024-06-10 14:25:04.077411] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.485 [2024-06-10 14:25:04.077421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.089442] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.089454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.101490] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.101507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.113511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.113524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.125544] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.125555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 
[2024-06-10 14:25:04.137576] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.137586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.149609] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.149618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.161645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.161657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.173676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.173688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.185709] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.185718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.197739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.197748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.209776] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.209785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.221810] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.221822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.233845] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.233854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.245878] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.245887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.257915] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.257926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.269944] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.269954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.281975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.281985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.294009] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.294017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.306043] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 
14:25:04.306054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 [2024-06-10 14:25:04.318094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.318112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.780 Running I/O for 5 seconds... 00:16:26.780 [2024-06-10 14:25:04.330112] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.780 [2024-06-10 14:25:04.330121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.781 [2024-06-10 14:25:04.346306] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.781 [2024-06-10 14:25:04.346331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.781 [2024-06-10 14:25:04.362797] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.781 [2024-06-10 14:25:04.362817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.379017] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.379036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.391470] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.391488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.408332] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.408351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.424620] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.424637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.441889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.441907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.459281] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.459299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.476084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.476102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.493223] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.493241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.048 [2024-06-10 14:25:04.510392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.048 [2024-06-10 14:25:04.510419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.526637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.526655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 
14:25:04.544296] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.544320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.561504] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.561524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.578031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.578049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.594557] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.594576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.612006] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.612024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.049 [2024-06-10 14:25:04.628330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.049 [2024-06-10 14:25:04.628348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.645072] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.645091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.661333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.661351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.678647] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.678665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.695600] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.695618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.712205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.712222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.729637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.729655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.745420] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.745438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.762216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.762234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.778507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.778525] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.789841] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.789859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.806008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.806026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.822505] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.822527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.840009] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.840027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.855507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.855525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.866755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.866773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.884206] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.884224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.309 [2024-06-10 14:25:04.899666] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.309 [2024-06-10 14:25:04.899683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.911104] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.911122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.927193] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.927211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.944375] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.944392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.961666] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.961684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.977574] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.977591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:04.995182] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:04.995200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.012352] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.012370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.029147] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.029165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.040697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.040714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.056557] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.056575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.073482] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.073500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.090551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.090568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.107428] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.107446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.124321] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.124343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.141660] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.141678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.569 [2024-06-10 14:25:05.158702] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.569 [2024-06-10 14:25:05.158720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.175869] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.175887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.192962] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.192979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.208749] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.208767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.219957] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.219974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.235953] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.235970] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.252621] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.252639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.269748] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.269766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.286948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.286966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.303969] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.303986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.320969] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.320987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.337283] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.337300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.353971] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.353988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.371214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.371231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.388111] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.388129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.404763] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.404780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.830 [2024-06-10 14:25:05.421122] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.830 [2024-06-10 14:25:05.421140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.438518] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.438536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.454618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.454635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.472352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.472370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.488558] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.488575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.505111] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.505127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.522179] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.522196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.539744] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.539761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.555724] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.555741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.566953] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.566971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.583775] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.583793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.600670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.600687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.617939] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.617956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.633825] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.633842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.091 [2024-06-10 14:25:05.651137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.091 [2024-06-10 14:25:05.651154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.092 [2024-06-10 14:25:05.668051] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.092 [2024-06-10 14:25:05.668069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.092 [2024-06-10 14:25:05.685177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.092 [2024-06-10 14:25:05.685195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.702528] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.702545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.718772] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.718789] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.736216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.736234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.752564] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.752582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.770216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.770234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.785273] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.785290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.801844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.801861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.817978] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.817996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.835418] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.835436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.852158] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.852175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.869086] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.869104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.886031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.886048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.903164] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.903181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.920421] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.920439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.353 [2024-06-10 14:25:05.937159] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.353 [2024-06-10 14:25:05.937177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:05.954256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:05.954274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:05.971309] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:05.971333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:05.988221] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:05.988239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.005281] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.005300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.021528] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.021546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.038956] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.038974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.055664] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.055681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.071851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.071869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.613 [2024-06-10 14:25:06.089327] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.613 [2024-06-10 14:25:06.089345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.105755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.105773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.121977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.121994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.133223] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.133240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.149311] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.149334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.166157] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.166175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.183732] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.183750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.614 [2024-06-10 14:25:06.199012] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.614 [2024-06-10 14:25:06.199030] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.210070] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.210088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.226224] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.226242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.242852] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.242871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.259906] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.259925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.275256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.275274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.286566] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.286584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.302639] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.302657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.319283] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.319300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.336527] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.336545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.353356] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.353373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.370080] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.370098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.387459] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.387477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.403979] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.403996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.420737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.420754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.437885] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.437903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.875 [2024-06-10 14:25:06.455209] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.875 [2024-06-10 14:25:06.455227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.136 [2024-06-10 14:25:06.471611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.136 [2024-06-10 14:25:06.471629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.136 [2024-06-10 14:25:06.489140] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.136 [2024-06-10 14:25:06.489158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.136 [2024-06-10 14:25:06.506031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.136 [2024-06-10 14:25:06.506049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.522923] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.522941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.538770] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.538787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.550019] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.550038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.566548] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.566566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.583219] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.583237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.600095] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.600113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.616608] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.616626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.632943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.632961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.649828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.649846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.666380] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.666402] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.683983] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.684001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.699297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.699322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.137 [2024-06-10 14:25:06.715490] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.137 [2024-06-10 14:25:06.715508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.732816] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.732833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.749181] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.749200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.766324] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.766342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.783156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.783174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.800201] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.800219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.816850] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.816869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.834285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.834304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.851039] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.851057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.868027] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.868045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.884395] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.884413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.901184] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.901201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.918766] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.918784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.934715] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.934733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.951989] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.952007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.968171] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.968188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.397 [2024-06-10 14:25:06.985483] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.397 [2024-06-10 14:25:06.985505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.002188] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.002206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.019384] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.019401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.036514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.036532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.053305] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.053327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.070185] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.070203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.087280] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.087297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.104156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.104174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.121361] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.121378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.138551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.138569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.155560] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.155578] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.172850] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.172868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.190100] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.190117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.206294] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.206312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.223280] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.223297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.657 [2024-06-10 14:25:07.240525] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.657 [2024-06-10 14:25:07.240543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.256748] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.256765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.267639] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.267656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.283864] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.283881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.300582] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.300603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.317458] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.317476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.334704] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.334722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.351312] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.351334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.368919] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.368937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.385321] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.385338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.916 [2024-06-10 14:25:07.402882] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.916 [2024-06-10 14:25:07.402899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.419101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.419119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.436414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.436431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.453077] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.453094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.470619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.470637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.486591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.486608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.917 [2024-06-10 14:25:07.504068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.917 [2024-06-10 14:25:07.504086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.520336] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.520354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.537187] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.537205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.553732] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.553749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.570481] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.570499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.587399] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.587416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.604331] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.604349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.621511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.621533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.637663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.637681] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.655210] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.655227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.672342] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.672359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.689745] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.689762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.706787] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.706804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.723554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.723571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.740723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.740741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.178 [2024-06-10 14:25:07.757637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.178 [2024-06-10 14:25:07.757654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.439 [2024-06-10 14:25:07.774592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.439 [2024-06-10 14:25:07.774610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.439 [2024-06-10 14:25:07.791250] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.439 [2024-06-10 14:25:07.791268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.439 [2024-06-10 14:25:07.808004] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.439 [2024-06-10 14:25:07.808021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.439 [2024-06-10 14:25:07.825341] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.439 [2024-06-10 14:25:07.825359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.840892] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.840909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.852036] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.852053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.868299] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.868323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.885342] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.885360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.902414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.902432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.919254] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.919272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.936535] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.936553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.952738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.952756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.965062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.965080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.981752] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.981769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:07.997875] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:07.997892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:08.015050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:08.015068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.440 [2024-06-10 14:25:08.032011] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.440 [2024-06-10 14:25:08.032028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.048168] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.048186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.059471] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.059489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.076031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.076049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.092555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.092573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.110154] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.110172] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.127115] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.127133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.143706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.143724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.161055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.161073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.177585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.177603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.194240] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.194257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.211887] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.211904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.228350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.228367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.245579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.245598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.262404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.262422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.701 [2024-06-10 14:25:08.279577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.701 [2024-06-10 14:25:08.279595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.296909] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.296927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.312474] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.312491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.327673] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.327691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.338737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.338755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.355384] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.355401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.372796] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.372814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.389253] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.389271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.407048] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.407066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.423112] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.423130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.434440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.434458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.450596] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.450614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.467776] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.467794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.484835] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.484852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.502151] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.502170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.518255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.962 [2024-06-10 14:25:08.518273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.962 [2024-06-10 14:25:08.529344] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.963 [2024-06-10 14:25:08.529362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.963 [2024-06-10 14:25:08.545516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.963 [2024-06-10 14:25:08.545534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.562170] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.562188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.580032] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.580049] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.594700] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.594717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.611078] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.611095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.628279] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.628296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.644533] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.644550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.661666] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.661684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.678656] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.678673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.695628] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.695646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.712344] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.712361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.729320] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.729338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.745971] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.745989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.762670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.762689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.779384] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.779401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.796198] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.796215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.224 [2024-06-10 14:25:08.813333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.224 [2024-06-10 14:25:08.813351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.830249] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.830266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.847286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.847303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.864263] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.864280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.881584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.881602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.898117] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.898134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.914847] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.914865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.932236] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.932254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.948672] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.948689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.966524] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.966541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:08.983029] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:08.983046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.000414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.000431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.016704] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.016722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.034203] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.034220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.049950] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.049968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.061105] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.061123] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.485 [2024-06-10 14:25:09.077663] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.485 [2024-06-10 14:25:09.077681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.093769] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.093787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.105200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.105218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.121270] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.121287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.138710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.138727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.154729] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.154753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.171479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.171497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.188308] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.188329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.205122] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.205139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.222786] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.222804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.238338] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.238357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.249576] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.249593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.265978] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.265996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.281914] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.281932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.293288] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.293306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.310283] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.310301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.746 [2024-06-10 14:25:09.326846] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.746 [2024-06-10 14:25:09.326863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.343257] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.343274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 00:16:32.005 Latency(us) 00:16:32.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.005 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:32.005 Nvme1n1 : 5.01 13544.17 105.81 0.00 0.00 9439.75 4532.91 19114.67 00:16:32.005 =================================================================================================================== 00:16:32.005 Total : 13544.17 105.81 0.00 0.00 9439.75 4532.91 19114.67 00:16:32.005 [2024-06-10 14:25:09.355281] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.355298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.367317] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.367334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.379350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.379364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.391381] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.391400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.403411] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.403423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.415439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.415451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.427471] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.427481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.439506] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.439518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.451533] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.451543] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.463570] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.005 [2024-06-10 14:25:09.463581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.005 [2024-06-10 14:25:09.475600] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.006 [2024-06-10 14:25:09.475609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3000471) - No such process 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3000471 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:32.006 delay0 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.006 14:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:32.006 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.264 [2024-06-10 14:25:09.674508] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:40.399 Initializing NVMe Controllers 00:16:40.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.400 Initialization complete. Launching workers. 
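In rough terms, the zcopy abort stage above swaps the malloc namespace for a delay bdev so that queued I/O stays in flight long enough to be aborted; a minimal sketch of that RPC sequence, using SPDK's scripts/rpc.py in place of the test's rpc_cmd helper and the same literals that appear in the log, would be:

    # assumes a running nvmf/tcp target that already exposes nqn.2016-06.io.spdk:cnode1 with bdev malloc0
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # delay bdev: ~1 s (1,000,000 us) average and p99 latency for both reads and writes
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive abort requests at the slow namespace for 5 seconds (the per-namespace counts follow below)
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'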
00:16:40.400 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 18900 00:16:40.400 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19077, failed to submit 91 00:16:40.400 success 18966, unsuccess 111, failed 0 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.400 rmmod nvme_tcp 00:16:40.400 rmmod nvme_fabrics 00:16:40.400 rmmod nvme_keyring 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2998230 ']' 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2998230 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2998230 ']' 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2998230 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2998230 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2998230' 00:16:40.400 killing process with pid 2998230 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2998230 00:16:40.400 14:25:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2998230 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.400 14:25:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.784 14:25:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.784 00:16:41.784 real 0m33.577s 00:16:41.784 user 0m45.831s 00:16:41.784 sys 0m10.289s 00:16:41.784 14:25:19 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:16:41.784 14:25:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 ************************************ 00:16:41.784 END TEST nvmf_zcopy 00:16:41.784 ************************************ 00:16:41.784 14:25:19 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:41.784 14:25:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:41.784 14:25:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:41.784 14:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.784 ************************************ 00:16:41.784 START TEST nvmf_nmic 00:16:41.784 ************************************ 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:41.784 * Looking for test storage... 00:16:41.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.784 14:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:48.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:48.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:48.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:48.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.367 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.368 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.368 14:25:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:16:48.629 00:16:48.629 --- 10.0.0.2 ping statistics --- 00:16:48.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.629 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:16:48.629 00:16:48.629 --- 10.0.0.1 ping statistics --- 00:16:48.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.629 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.629 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3007139 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3007139 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 3007139 ']' 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:48.890 14:25:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.890 [2024-06-10 14:25:26.302916] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:16:48.890 [2024-06-10 14:25:26.302986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.890 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.890 [2024-06-10 14:25:26.389530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.150 [2024-06-10 14:25:26.486672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.150 [2024-06-10 14:25:26.486728] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.150 [2024-06-10 14:25:26.486736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.150 [2024-06-10 14:25:26.486743] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.150 [2024-06-10 14:25:26.486749] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.150 [2024-06-10 14:25:26.486879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.150 [2024-06-10 14:25:26.487020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.150 [2024-06-10 14:25:26.487190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.150 [2024-06-10 14:25:26.487191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 [2024-06-10 14:25:27.227154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 Malloc0 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.721 [2024-06-10 14:25:27.286534] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.721 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:49.722 test case1: single bdev can't be used in multiple subsystems 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.722 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.982 [2024-06-10 14:25:27.322595] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:49.982 [2024-06-10 14:25:27.322614] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:49.982 [2024-06-10 14:25:27.322621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:49.982 request: 00:16:49.982 { 00:16:49.982 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:49.982 "namespace": { 00:16:49.982 "bdev_name": "Malloc0", 00:16:49.982 "no_auto_visible": false 00:16:49.982 }, 00:16:49.982 "method": "nvmf_subsystem_add_ns", 00:16:49.982 "req_id": 1 00:16:49.982 } 00:16:49.982 Got JSON-RPC error response 00:16:49.982 response: 00:16:49.982 { 00:16:49.982 "code": -32602, 00:16:49.982 "message": "Invalid parameters" 00:16:49.982 } 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:49.982 Adding namespace failed - expected result. 
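For reference, a minimal out-of-band sketch of the RPC sequence that test case1 above exercises, using the same scripts/rpc.py subcommands that appear in the log. NQNs, serials, and the Malloc0 size are taken from the log; the repo-relative rpc.py path is an assumption here, not something the log prints.

# create the TCP transport and a 64 MiB / 512 B-block malloc bdev (as nmic.sh does)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# first subsystem claims Malloc0 (exclusive_write claim) and starts listening
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# second subsystem: adding the same bdev is expected to fail with
# "bdev Malloc0 already claimed", i.e. the JSON-RPC -32602 error shown above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'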
00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:49.982 test case2: host connect to nvmf target in multiple paths 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.982 [2024-06-10 14:25:27.334728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:49.982 14:25:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.363 14:25:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:52.744 14:25:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.744 14:25:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:52.744 14:25:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.744 14:25:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:52.744 14:25:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:55.287 14:25:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:55.287 [global] 00:16:55.287 thread=1 00:16:55.287 invalidate=1 00:16:55.287 rw=write 00:16:55.287 time_based=1 00:16:55.287 runtime=1 00:16:55.287 ioengine=libaio 00:16:55.287 direct=1 00:16:55.287 bs=4096 00:16:55.287 iodepth=1 00:16:55.287 norandommap=0 00:16:55.287 numjobs=1 00:16:55.287 00:16:55.287 verify_dump=1 00:16:55.287 verify_backlog=512 00:16:55.287 verify_state_save=0 00:16:55.287 do_verify=1 00:16:55.287 verify=crc32c-intel 00:16:55.287 [job0] 00:16:55.287 filename=/dev/nvme0n1 00:16:55.287 Could not set queue depth (nvme0n1) 00:16:55.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.287 fio-3.35 00:16:55.287 Starting 1 thread 00:16:56.313 00:16:56.313 job0: (groupid=0, jobs=1): err= 0: pid=3008669: Mon Jun 10 14:25:33 2024 00:16:56.313 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:56.313 slat (nsec): min=7355, max=58684, avg=24909.81, stdev=3701.04 
00:16:56.313 clat (usec): min=496, max=1270, avg=1000.02, stdev=82.69 00:16:56.313 lat (usec): min=522, max=1295, avg=1024.93, stdev=82.92 00:16:56.313 clat percentiles (usec): 00:16:56.313 | 1.00th=[ 693], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 955], 00:16:56.313 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:16:56.313 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:16:56.313 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1270], 99.95th=[ 1270], 00:16:56.313 | 99.99th=[ 1270] 00:16:56.313 write: IOPS=682, BW=2729KiB/s (2795kB/s)(2732KiB/1001msec); 0 zone resets 00:16:56.313 slat (usec): min=10, max=27791, avg=72.77, stdev=1062.20 00:16:56.313 clat (usec): min=294, max=1149, avg=608.89, stdev=83.15 00:16:56.313 lat (usec): min=305, max=28506, avg=681.66, stdev=1069.57 00:16:56.313 clat percentiles (usec): 00:16:56.313 | 1.00th=[ 355], 5.00th=[ 441], 10.00th=[ 506], 20.00th=[ 562], 00:16:56.313 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 635], 00:16:56.313 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 693], 95.00th=[ 709], 00:16:56.313 | 99.00th=[ 742], 99.50th=[ 791], 99.90th=[ 1156], 99.95th=[ 1156], 00:16:56.313 | 99.99th=[ 1156] 00:16:56.313 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.313 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.313 lat (usec) : 500=5.19%, 750=52.13%, 1000=19.00% 00:16:56.313 lat (msec) : 2=23.68% 00:16:56.313 cpu : usr=1.70%, sys=3.70%, ctx=1199, majf=0, minf=1 00:16:56.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.313 issued rwts: total=512,683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.313 00:16:56.313 Run status group 0 (all jobs): 00:16:56.313 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:56.313 WRITE: bw=2729KiB/s (2795kB/s), 2729KiB/s-2729KiB/s (2795kB/s-2795kB/s), io=2732KiB (2798kB), run=1001-1001msec 00:16:56.313 00:16:56.313 Disk stats (read/write): 00:16:56.313 nvme0n1: ios=537/524, merge=0/0, ticks=1464/303, in_queue=1767, util=98.90% 00:16:56.313 14:25:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.574 rmmod nvme_tcp 00:16:56.574 rmmod nvme_fabrics 00:16:56.574 rmmod nvme_keyring 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3007139 ']' 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3007139 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 3007139 ']' 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 3007139 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:56.574 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3007139 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3007139' 00:16:56.834 killing process with pid 3007139 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 3007139 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 3007139 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.834 14:25:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.382 14:25:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.382 00:16:59.382 real 0m17.236s 00:16:59.382 user 0m47.687s 00:16:59.382 sys 0m6.025s 00:16:59.382 14:25:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:59.383 14:25:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:59.383 ************************************ 00:16:59.383 END TEST nvmf_nmic 00:16:59.383 ************************************ 00:16:59.383 14:25:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:59.383 14:25:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:59.383 14:25:36 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:16:59.383 14:25:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.383 ************************************ 00:16:59.383 START TEST nvmf_fio_target 00:16:59.383 ************************************ 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:59.383 * Looking for test storage... 00:16:59.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.383 14:25:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.979 14:25:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:05.979 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:05.979 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.979 14:25:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:05.979 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:05.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.979 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:17:05.979 00:17:05.979 --- 10.0.0.2 ping statistics --- 00:17:05.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.979 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:17:05.980 00:17:05.980 --- 10.0.0.1 ping statistics --- 00:17:05.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.980 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3013011 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3013011 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 3013011 ']' 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
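The nvmf_fio_target setup above moves one e810 port into a private network namespace and leaves its peer in the root namespace, so initiator and target talk over real hardware at 10.0.0.1/10.0.0.2. A condensed sketch of that sequence, copied from the commands in the log; the interface names cvl_0_0/cvl_0_1, the namespace name, and the repo-relative nvmf_tgt path are simply the values this run happened to use.

# target-side port goes into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target application is then started inside the namespace (nvmfpid above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF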
00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:05.980 14:25:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.980 [2024-06-10 14:25:43.478164] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:17:05.980 [2024-06-10 14:25:43.478217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.980 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.980 [2024-06-10 14:25:43.553678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.240 [2024-06-10 14:25:43.627018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.240 [2024-06-10 14:25:43.627057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.240 [2024-06-10 14:25:43.627065] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.240 [2024-06-10 14:25:43.627071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.240 [2024-06-10 14:25:43.627076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.240 [2024-06-10 14:25:43.628335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.240 [2024-06-10 14:25:43.628482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.240 [2024-06-10 14:25:43.628641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.240 [2024-06-10 14:25:43.628641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.811 14:25:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:07.073 [2024-06-10 14:25:44.582796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.073 14:25:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.334 14:25:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:07.334 14:25:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.594 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:07.594 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.853 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:07.853 14:25:45 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.853 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:07.853 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:08.114 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.375 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:08.375 14:25:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.635 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:08.635 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.896 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:08.896 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:09.156 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:09.156 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.156 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.415 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.415 14:25:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:09.676 14:25:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.936 [2024-06-10 14:25:47.365274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.936 14:25:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:10.197 14:25:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:10.459 14:25:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:17:11.846 14:25:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:17:14.393 14:25:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:14.393 [global] 00:17:14.393 thread=1 00:17:14.393 invalidate=1 00:17:14.393 rw=write 00:17:14.393 time_based=1 00:17:14.393 runtime=1 00:17:14.393 ioengine=libaio 00:17:14.393 direct=1 00:17:14.393 bs=4096 00:17:14.393 iodepth=1 00:17:14.393 norandommap=0 00:17:14.393 numjobs=1 00:17:14.393 00:17:14.393 verify_dump=1 00:17:14.393 verify_backlog=512 00:17:14.393 verify_state_save=0 00:17:14.393 do_verify=1 00:17:14.393 verify=crc32c-intel 00:17:14.393 [job0] 00:17:14.393 filename=/dev/nvme0n1 00:17:14.393 [job1] 00:17:14.393 filename=/dev/nvme0n2 00:17:14.393 [job2] 00:17:14.393 filename=/dev/nvme0n3 00:17:14.393 [job3] 00:17:14.393 filename=/dev/nvme0n4 00:17:14.393 Could not set queue depth (nvme0n1) 00:17:14.393 Could not set queue depth (nvme0n2) 00:17:14.393 Could not set queue depth (nvme0n3) 00:17:14.393 Could not set queue depth (nvme0n4) 00:17:14.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.393 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.393 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.393 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.393 fio-3.35 00:17:14.393 Starting 4 threads 00:17:15.776 00:17:15.776 job0: (groupid=0, jobs=1): err= 0: pid=3014914: Mon Jun 10 14:25:53 2024 00:17:15.776 read: IOPS=89, BW=358KiB/s (367kB/s)(372KiB/1039msec) 00:17:15.776 slat (nsec): min=24217, max=25803, avg=24849.27, stdev=231.02 00:17:15.776 clat (usec): min=717, max=42051, avg=7517.08, stdev=15028.57 00:17:15.776 lat (usec): min=742, max=42075, avg=7541.93, stdev=15028.66 00:17:15.776 clat percentiles (usec): 00:17:15.776 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 922], 00:17:15.776 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:17:15.776 | 70.00th=[ 1029], 80.00th=[ 1123], 90.00th=[41681], 95.00th=[41681], 00:17:15.776 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.776 | 99.99th=[42206] 00:17:15.776 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:17:15.776 slat (nsec): min=9760, max=52101, avg=30466.94, stdev=8408.76 00:17:15.776 clat (usec): min=265, max=928, 
avg=613.77, stdev=118.66 00:17:15.776 lat (usec): min=275, max=962, avg=644.24, stdev=122.46 00:17:15.776 clat percentiles (usec): 00:17:15.776 | 1.00th=[ 318], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 510], 00:17:15.776 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:17:15.776 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:17:15.776 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 930], 00:17:15.776 | 99.99th=[ 930] 00:17:15.776 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.776 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.776 lat (usec) : 500=14.55%, 750=61.65%, 1000=17.85% 00:17:15.776 lat (msec) : 2=3.47%, 50=2.48% 00:17:15.776 cpu : usr=1.16%, sys=1.45%, ctx=607, majf=0, minf=1 00:17:15.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.776 issued rwts: total=93,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.776 job1: (groupid=0, jobs=1): err= 0: pid=3014925: Mon Jun 10 14:25:53 2024 00:17:15.776 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:15.776 slat (nsec): min=6963, max=42149, avg=23972.53, stdev=2824.52 00:17:15.776 clat (usec): min=703, max=1213, avg=989.32, stdev=77.77 00:17:15.776 lat (usec): min=727, max=1237, avg=1013.29, stdev=77.74 00:17:15.776 clat percentiles (usec): 00:17:15.776 | 1.00th=[ 775], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:17:15.776 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:17:15.776 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:17:15.776 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:17:15.776 | 99.99th=[ 1221] 00:17:15.777 write: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec); 0 zone resets 00:17:15.777 slat (nsec): min=8994, max=55715, avg=26717.40, stdev=8714.66 00:17:15.777 clat (usec): min=235, max=829, avg=594.91, stdev=107.92 00:17:15.777 lat (usec): min=255, max=859, avg=621.63, stdev=112.14 00:17:15.777 clat percentiles (usec): 00:17:15.777 | 1.00th=[ 326], 5.00th=[ 375], 10.00th=[ 449], 20.00th=[ 506], 00:17:15.777 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:17:15.777 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 742], 00:17:15.777 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 832], 99.95th=[ 832], 00:17:15.777 | 99.99th=[ 832] 00:17:15.777 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.777 lat (usec) : 250=0.16%, 500=11.33%, 750=46.34%, 1000=22.42% 00:17:15.777 lat (msec) : 2=19.75% 00:17:15.777 cpu : usr=1.50%, sys=3.70%, ctx=1271, majf=0, minf=1 00:17:15.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 issued rwts: total=512,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.777 job2: (groupid=0, jobs=1): err= 0: pid=3014936: Mon Jun 10 14:25:53 2024 00:17:15.777 read: IOPS=16, BW=67.6KiB/s 
(69.2kB/s)(68.0KiB/1006msec) 00:17:15.777 slat (nsec): min=25039, max=25764, avg=25252.35, stdev=171.31 00:17:15.777 clat (usec): min=1210, max=42036, avg=39561.61, stdev=9882.85 00:17:15.777 lat (usec): min=1236, max=42061, avg=39586.86, stdev=9882.81 00:17:15.777 clat percentiles (usec): 00:17:15.777 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41681], 20.00th=[41681], 00:17:15.777 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:15.777 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:15.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:15.777 | 99.99th=[42206] 00:17:15.777 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:17:15.777 slat (nsec): min=9703, max=50864, avg=28747.85, stdev=9947.41 00:17:15.777 clat (usec): min=248, max=984, avg=608.46, stdev=111.52 00:17:15.777 lat (usec): min=259, max=1017, avg=637.21, stdev=116.38 00:17:15.777 clat percentiles (usec): 00:17:15.777 | 1.00th=[ 351], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 502], 00:17:15.777 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:17:15.777 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 734], 95.00th=[ 758], 00:17:15.777 | 99.00th=[ 799], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:17:15.777 | 99.99th=[ 988] 00:17:15.777 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:17:15.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:15.777 lat (usec) : 250=0.19%, 500=18.53%, 750=72.40%, 1000=5.67% 00:17:15.777 lat (msec) : 2=0.19%, 50=3.02% 00:17:15.777 cpu : usr=0.90%, sys=1.29%, ctx=530, majf=0, minf=1 00:17:15.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.777 job3: (groupid=0, jobs=1): err= 0: pid=3014937: Mon Jun 10 14:25:53 2024 00:17:15.777 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:15.777 slat (nsec): min=7050, max=60587, avg=26903.65, stdev=3875.03 00:17:15.777 clat (usec): min=607, max=1607, avg=998.17, stdev=79.47 00:17:15.777 lat (usec): min=633, max=1633, avg=1025.08, stdev=79.45 00:17:15.777 clat percentiles (usec): 00:17:15.777 | 1.00th=[ 816], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 947], 00:17:15.777 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1020], 00:17:15.777 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:17:15.777 | 99.00th=[ 1172], 99.50th=[ 1287], 99.90th=[ 1614], 99.95th=[ 1614], 00:17:15.777 | 99.99th=[ 1614] 00:17:15.777 write: IOPS=816, BW=3265KiB/s (3343kB/s)(3268KiB/1001msec); 0 zone resets 00:17:15.777 slat (nsec): min=8956, max=69411, avg=29775.05, stdev=10397.71 00:17:15.777 clat (usec): min=174, max=1150, avg=535.42, stdev=131.39 00:17:15.777 lat (usec): min=208, max=1184, avg=565.19, stdev=135.16 00:17:15.777 clat percentiles (usec): 00:17:15.777 | 1.00th=[ 265], 5.00th=[ 314], 10.00th=[ 355], 20.00th=[ 424], 00:17:15.777 | 30.00th=[ 465], 40.00th=[ 506], 50.00th=[ 545], 60.00th=[ 578], 00:17:15.777 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 734], 00:17:15.777 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1156], 99.95th=[ 1156], 00:17:15.777 | 99.99th=[ 1156] 00:17:15.777 
bw ( KiB/s): min= 4104, max= 4104, per=41.00%, avg=4104.00, stdev= 0.00, samples=1 00:17:15.777 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:17:15.777 lat (usec) : 250=0.23%, 500=23.78%, 750=35.36%, 1000=21.22% 00:17:15.777 lat (msec) : 2=19.41% 00:17:15.777 cpu : usr=3.00%, sys=4.70%, ctx=1330, majf=0, minf=1 00:17:15.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.777 issued rwts: total=512,817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.777 00:17:15.777 Run status group 0 (all jobs): 00:17:15.777 READ: bw=4366KiB/s (4471kB/s), 67.6KiB/s-2046KiB/s (69.2kB/s-2095kB/s), io=4536KiB (4645kB), run=1001-1039msec 00:17:15.777 WRITE: bw=9.77MiB/s (10.2MB/s), 1971KiB/s-3265KiB/s (2018kB/s-3343kB/s), io=10.2MiB (10.6MB), run=1001-1039msec 00:17:15.777 00:17:15.777 Disk stats (read/write): 00:17:15.777 nvme0n1: ios=112/512, merge=0/0, ticks=1454/296, in_queue=1750, util=96.49% 00:17:15.777 nvme0n2: ios=542/512, merge=0/0, ticks=625/302, in_queue=927, util=96.84% 00:17:15.777 nvme0n3: ios=35/512, merge=0/0, ticks=1425/299, in_queue=1724, util=96.94% 00:17:15.777 nvme0n4: ios=570/549, merge=0/0, ticks=1048/218, in_queue=1266, util=97.01% 00:17:15.777 14:25:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:15.777 [global] 00:17:15.777 thread=1 00:17:15.777 invalidate=1 00:17:15.777 rw=randwrite 00:17:15.777 time_based=1 00:17:15.777 runtime=1 00:17:15.777 ioengine=libaio 00:17:15.777 direct=1 00:17:15.777 bs=4096 00:17:15.777 iodepth=1 00:17:15.777 norandommap=0 00:17:15.777 numjobs=1 00:17:15.777 00:17:15.777 verify_dump=1 00:17:15.777 verify_backlog=512 00:17:15.777 verify_state_save=0 00:17:15.777 do_verify=1 00:17:15.777 verify=crc32c-intel 00:17:15.777 [job0] 00:17:15.777 filename=/dev/nvme0n1 00:17:15.777 [job1] 00:17:15.777 filename=/dev/nvme0n2 00:17:15.777 [job2] 00:17:15.777 filename=/dev/nvme0n3 00:17:15.777 [job3] 00:17:15.777 filename=/dev/nvme0n4 00:17:15.777 Could not set queue depth (nvme0n1) 00:17:15.777 Could not set queue depth (nvme0n2) 00:17:15.777 Could not set queue depth (nvme0n3) 00:17:15.777 Could not set queue depth (nvme0n4) 00:17:16.055 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.055 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.055 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.055 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.055 fio-3.35 00:17:16.055 Starting 4 threads 00:17:17.439 00:17:17.439 job0: (groupid=0, jobs=1): err= 0: pid=3015370: Mon Jun 10 14:25:54 2024 00:17:17.439 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:17.439 slat (nsec): min=6163, max=57346, avg=23232.34, stdev=4396.35 00:17:17.439 clat (usec): min=662, max=1163, avg=967.82, stdev=71.61 00:17:17.439 lat (usec): min=686, max=1187, avg=991.05, stdev=72.78 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 922], 00:17:17.439 | 
30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:17:17.439 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:17:17.439 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:17.439 | 99.99th=[ 1172] 00:17:17.439 write: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec); 0 zone resets 00:17:17.439 slat (nsec): min=8874, max=48710, avg=25092.14, stdev=9151.97 00:17:17.439 clat (usec): min=245, max=970, avg=594.83, stdev=109.31 00:17:17.439 lat (usec): min=255, max=1000, avg=619.92, stdev=112.98 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 347], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 490], 00:17:17.439 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 635], 00:17:17.439 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 750], 00:17:17.439 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 971], 99.95th=[ 971], 00:17:17.439 | 99.99th=[ 971] 00:17:17.439 bw ( KiB/s): min= 4096, max= 4096, per=46.03%, avg=4096.00, stdev= 0.00, samples=1 00:17:17.439 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:17.439 lat (usec) : 250=0.08%, 500=12.93%, 750=44.12%, 1000=29.64% 00:17:17.439 lat (msec) : 2=13.24% 00:17:17.439 cpu : usr=1.80%, sys=3.30%, ctx=1292, majf=0, minf=1 00:17:17.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 issued rwts: total=512,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.439 job1: (groupid=0, jobs=1): err= 0: pid=3015390: Mon Jun 10 14:25:54 2024 00:17:17.439 read: IOPS=17, BW=69.2KiB/s (70.8kB/s)(72.0KiB/1041msec) 00:17:17.439 slat (nsec): min=23779, max=24682, avg=24009.67, stdev=212.46 00:17:17.439 clat (usec): min=1027, max=42075, avg=39501.26, stdev=9609.65 00:17:17.439 lat (usec): min=1052, max=42099, avg=39525.27, stdev=9609.59 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[40633], 20.00th=[41157], 00:17:17.439 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:17.439 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:17.439 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:17.439 | 99.99th=[42206] 00:17:17.439 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:17.439 slat (nsec): min=9011, max=48661, avg=27705.03, stdev=8141.82 00:17:17.439 clat (usec): min=235, max=882, avg=607.14, stdev=113.03 00:17:17.439 lat (usec): min=254, max=911, avg=634.84, stdev=115.83 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 510], 00:17:17.439 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:17:17.439 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 775], 00:17:17.439 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 881], 99.95th=[ 881], 00:17:17.439 | 99.99th=[ 881] 00:17:17.439 bw ( KiB/s): min= 4096, max= 4096, per=46.03%, avg=4096.00, stdev= 0.00, samples=1 00:17:17.439 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:17.439 lat (usec) : 250=0.38%, 500=17.36%, 750=68.87%, 1000=10.00% 00:17:17.439 lat (msec) : 2=0.19%, 50=3.21% 00:17:17.439 cpu : usr=0.67%, sys=1.35%, ctx=530, majf=0, minf=1 00:17:17.439 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.439 job2: (groupid=0, jobs=1): err= 0: pid=3015408: Mon Jun 10 14:25:54 2024 00:17:17.439 read: IOPS=17, BW=69.8KiB/s (71.5kB/s)(72.0KiB/1031msec) 00:17:17.439 slat (nsec): min=10130, max=26998, avg=25369.00, stdev=3810.56 00:17:17.439 clat (usec): min=41874, max=42231, avg=41979.13, stdev=71.38 00:17:17.439 lat (usec): min=41901, max=42241, avg=42004.50, stdev=68.06 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:17:17.439 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:17.439 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:17.439 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:17.439 | 99.99th=[42206] 00:17:17.439 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:17:17.439 slat (nsec): min=8816, max=71312, avg=30549.16, stdev=8175.83 00:17:17.439 clat (usec): min=135, max=780, avg=497.12, stdev=114.43 00:17:17.439 lat (usec): min=167, max=813, avg=527.67, stdev=117.50 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 178], 5.00th=[ 293], 10.00th=[ 359], 20.00th=[ 392], 00:17:17.439 | 30.00th=[ 437], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[ 529], 00:17:17.439 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:17:17.439 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 783], 99.95th=[ 783], 00:17:17.439 | 99.99th=[ 783] 00:17:17.439 bw ( KiB/s): min= 4096, max= 4096, per=46.03%, avg=4096.00, stdev= 0.00, samples=1 00:17:17.439 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:17.439 lat (usec) : 250=2.45%, 500=43.58%, 750=50.38%, 1000=0.19% 00:17:17.439 lat (msec) : 50=3.40% 00:17:17.439 cpu : usr=1.17%, sys=1.84%, ctx=530, majf=0, minf=1 00:17:17.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.439 job3: (groupid=0, jobs=1): err= 0: pid=3015415: Mon Jun 10 14:25:54 2024 00:17:17.439 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1039msec) 00:17:17.439 slat (nsec): min=23879, max=24800, avg=24096.76, stdev=202.44 00:17:17.439 clat (usec): min=41028, max=42026, avg=41800.60, stdev=357.28 00:17:17.439 lat (usec): min=41052, max=42050, avg=41824.70, stdev=357.28 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:17.439 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:17.439 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:17.439 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:17.439 | 99.99th=[42206] 00:17:17.439 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:17:17.439 slat (nsec): min=9185, max=81396, avg=27131.14, stdev=8282.79 00:17:17.439 clat (usec): min=242, max=828, 
avg=604.94, stdev=108.83 00:17:17.439 lat (usec): min=273, max=860, avg=632.07, stdev=111.87 00:17:17.439 clat percentiles (usec): 00:17:17.439 | 1.00th=[ 326], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 515], 00:17:17.439 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 644], 00:17:17.439 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 758], 00:17:17.439 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 832], 00:17:17.439 | 99.99th=[ 832] 00:17:17.439 bw ( KiB/s): min= 4096, max= 4096, per=46.03%, avg=4096.00, stdev= 0.00, samples=1 00:17:17.439 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:17.439 lat (usec) : 250=0.19%, 500=17.01%, 750=73.35%, 1000=6.24% 00:17:17.439 lat (msec) : 50=3.21% 00:17:17.439 cpu : usr=0.67%, sys=1.35%, ctx=529, majf=0, minf=1 00:17:17.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:17.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.439 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:17.439 00:17:17.439 Run status group 0 (all jobs): 00:17:17.439 READ: bw=2171KiB/s (2223kB/s), 65.4KiB/s-2046KiB/s (67.0kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1041msec 00:17:17.439 WRITE: bw=8899KiB/s (9113kB/s), 1967KiB/s-3117KiB/s (2015kB/s-3192kB/s), io=9264KiB (9486kB), run=1001-1041msec 00:17:17.439 00:17:17.439 Disk stats (read/write): 00:17:17.439 nvme0n1: ios=562/512, merge=0/0, ticks=745/295, in_queue=1040, util=90.08% 00:17:17.439 nvme0n2: ios=45/512, merge=0/0, ticks=559/290, in_queue=849, util=87.70% 00:17:17.439 nvme0n3: ios=13/512, merge=0/0, ticks=546/180, in_queue=726, util=88.44% 00:17:17.439 nvme0n4: ios=12/512, merge=0/0, ticks=502/297, in_queue=799, util=89.48% 00:17:17.439 14:25:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:17.439 [global] 00:17:17.439 thread=1 00:17:17.439 invalidate=1 00:17:17.439 rw=write 00:17:17.439 time_based=1 00:17:17.439 runtime=1 00:17:17.439 ioengine=libaio 00:17:17.439 direct=1 00:17:17.440 bs=4096 00:17:17.440 iodepth=128 00:17:17.440 norandommap=0 00:17:17.440 numjobs=1 00:17:17.440 00:17:17.440 verify_dump=1 00:17:17.440 verify_backlog=512 00:17:17.440 verify_state_save=0 00:17:17.440 do_verify=1 00:17:17.440 verify=crc32c-intel 00:17:17.440 [job0] 00:17:17.440 filename=/dev/nvme0n1 00:17:17.440 [job1] 00:17:17.440 filename=/dev/nvme0n2 00:17:17.440 [job2] 00:17:17.440 filename=/dev/nvme0n3 00:17:17.440 [job3] 00:17:17.440 filename=/dev/nvme0n4 00:17:17.440 Could not set queue depth (nvme0n1) 00:17:17.440 Could not set queue depth (nvme0n2) 00:17:17.440 Could not set queue depth (nvme0n3) 00:17:17.440 Could not set queue depth (nvme0n4) 00:17:17.699 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.700 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.700 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.700 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:17.700 fio-3.35 00:17:17.700 Starting 4 threads 00:17:19.084 00:17:19.084 job0: (groupid=0, 
jobs=1): err= 0: pid=3015887: Mon Jun 10 14:25:56 2024 00:17:19.084 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:17:19.084 slat (nsec): min=1254, max=11277k, avg=121747.71, stdev=718083.27 00:17:19.084 clat (usec): min=9127, max=49462, avg=14734.14, stdev=5475.84 00:17:19.084 lat (usec): min=9132, max=49488, avg=14855.89, stdev=5535.79 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[10028], 5.00th=[11600], 10.00th=[11863], 20.00th=[12387], 00:17:19.084 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:17:19.084 | 70.00th=[13698], 80.00th=[14746], 90.00th=[18482], 95.00th=[27919], 00:17:19.084 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[44827], 00:17:19.084 | 99.99th=[49546] 00:17:19.084 write: IOPS=3268, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1008msec); 0 zone resets 00:17:19.084 slat (usec): min=2, max=9777, avg=185.90, stdev=871.75 00:17:19.084 clat (usec): min=4693, max=92998, avg=24986.48, stdev=17725.14 00:17:19.084 lat (usec): min=6436, max=93007, avg=25172.38, stdev=17827.10 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:17:19.084 | 30.00th=[13304], 40.00th=[17433], 50.00th=[18744], 60.00th=[19006], 00:17:19.084 | 70.00th=[25822], 80.00th=[39584], 90.00th=[49021], 95.00th=[58983], 00:17:19.084 | 99.00th=[88605], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:17:19.084 | 99.99th=[92799] 00:17:19.084 bw ( KiB/s): min= 9472, max=15864, per=16.87%, avg=12668.00, stdev=4519.83, samples=2 00:17:19.084 iops : min= 2368, max= 3966, avg=3167.00, stdev=1129.96, samples=2 00:17:19.084 lat (msec) : 10=0.79%, 20=76.72%, 50=18.08%, 100=4.41% 00:17:19.084 cpu : usr=2.48%, sys=3.18%, ctx=412, majf=0, minf=1 00:17:19.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:19.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.084 issued rwts: total=3072,3295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.084 job1: (groupid=0, jobs=1): err= 0: pid=3015900: Mon Jun 10 14:25:56 2024 00:17:19.084 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:17:19.084 slat (nsec): min=1311, max=10111k, avg=119336.37, stdev=803590.32 00:17:19.084 clat (usec): min=5397, max=49686, avg=13223.52, stdev=5633.76 00:17:19.084 lat (usec): min=5405, max=49693, avg=13342.86, stdev=5717.60 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 6259], 5.00th=[10028], 10.00th=[10421], 20.00th=[10552], 00:17:19.084 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:17:19.084 | 70.00th=[11600], 80.00th=[13173], 90.00th=[20317], 95.00th=[25035], 00:17:19.084 | 99.00th=[38536], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:17:19.084 | 99.99th=[49546] 00:17:19.084 write: IOPS=3790, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1005msec); 0 zone resets 00:17:19.084 slat (usec): min=2, max=8090, avg=145.63, stdev=621.04 00:17:19.084 clat (usec): min=1132, max=49668, avg=21030.47, stdev=9961.26 00:17:19.084 lat (usec): min=1144, max=49672, avg=21176.10, stdev=10029.00 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 4015], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10028], 00:17:19.084 | 30.00th=[14877], 40.00th=[18482], 50.00th=[19006], 60.00th=[22938], 00:17:19.084 | 70.00th=[27395], 80.00th=[31589], 90.00th=[33424], 95.00th=[36439], 
00:17:19.084 | 99.00th=[44827], 99.50th=[45351], 99.90th=[45351], 99.95th=[49546], 00:17:19.084 | 99.99th=[49546] 00:17:19.084 bw ( KiB/s): min=13552, max=16016, per=19.68%, avg=14784.00, stdev=1742.31, samples=2 00:17:19.084 iops : min= 3388, max= 4004, avg=3696.00, stdev=435.58, samples=2 00:17:19.084 lat (msec) : 2=0.07%, 4=0.43%, 10=12.01%, 20=58.99%, 50=28.50% 00:17:19.084 cpu : usr=3.19%, sys=3.29%, ctx=447, majf=0, minf=1 00:17:19.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:19.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.084 issued rwts: total=3584,3809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.084 job2: (groupid=0, jobs=1): err= 0: pid=3015922: Mon Jun 10 14:25:56 2024 00:17:19.084 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:17:19.084 slat (nsec): min=1340, max=11503k, avg=94638.81, stdev=699906.86 00:17:19.084 clat (usec): min=4387, max=22811, avg=11792.57, stdev=2531.90 00:17:19.084 lat (usec): min=4392, max=22838, avg=11887.21, stdev=2591.85 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 7177], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:17:19.084 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:17:19.084 | 70.00th=[11731], 80.00th=[13173], 90.00th=[15270], 95.00th=[17171], 00:17:19.084 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:17:19.084 | 99.99th=[22938] 00:17:19.084 write: IOPS=5962, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1010msec); 0 zone resets 00:17:19.084 slat (usec): min=2, max=9093, avg=72.37, stdev=502.77 00:17:19.084 clat (usec): min=1284, max=21926, avg=10250.88, stdev=2435.11 00:17:19.084 lat (usec): min=1297, max=21930, avg=10323.25, stdev=2468.43 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 3752], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 8848], 00:17:19.084 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:17:19.084 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11994], 95.00th=[14615], 00:17:19.084 | 99.00th=[17171], 99.50th=[19268], 99.90th=[20317], 99.95th=[20579], 00:17:19.084 | 99.99th=[21890] 00:17:19.084 bw ( KiB/s): min=22600, max=24560, per=31.40%, avg=23580.00, stdev=1385.93, samples=2 00:17:19.084 iops : min= 5650, max= 6140, avg=5895.00, stdev=346.48, samples=2 00:17:19.084 lat (msec) : 2=0.07%, 4=0.68%, 10=22.31%, 20=76.39%, 50=0.56% 00:17:19.084 cpu : usr=5.65%, sys=4.86%, ctx=508, majf=0, minf=1 00:17:19.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:19.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.084 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.084 job3: (groupid=0, jobs=1): err= 0: pid=3015931: Mon Jun 10 14:25:56 2024 00:17:19.084 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:17:19.084 slat (nsec): min=1329, max=10519k, avg=99227.85, stdev=775982.51 00:17:19.084 clat (usec): min=3674, max=22580, avg=12069.12, stdev=2938.58 00:17:19.084 lat (usec): min=3678, max=22610, avg=12168.35, stdev=2986.59 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 4686], 5.00th=[ 8356], 10.00th=[ 9503], 
20.00th=[10421], 00:17:19.084 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:17:19.084 | 70.00th=[12387], 80.00th=[14091], 90.00th=[16712], 95.00th=[18482], 00:17:19.084 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:17:19.084 | 99.99th=[22676] 00:17:19.084 write: IOPS=5780, BW=22.6MiB/s (23.7MB/s)(22.8MiB/1010msec); 0 zone resets 00:17:19.084 slat (usec): min=2, max=8639, avg=70.80, stdev=299.12 00:17:19.084 clat (usec): min=1164, max=21641, avg=10302.03, stdev=2229.64 00:17:19.084 lat (usec): min=1173, max=21646, avg=10372.83, stdev=2252.08 00:17:19.084 clat percentiles (usec): 00:17:19.084 | 1.00th=[ 3359], 5.00th=[ 5145], 10.00th=[ 6915], 20.00th=[ 9634], 00:17:19.084 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:17:19.084 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:17:19.084 | 99.00th=[16909], 99.50th=[17171], 99.90th=[20579], 99.95th=[20841], 00:17:19.084 | 99.99th=[21627] 00:17:19.084 bw ( KiB/s): min=21192, max=24496, per=30.42%, avg=22844.00, stdev=2336.28, samples=2 00:17:19.084 iops : min= 5298, max= 6124, avg=5711.00, stdev=584.07, samples=2 00:17:19.084 lat (msec) : 2=0.02%, 4=1.50%, 10=16.92%, 20=80.71%, 50=0.85% 00:17:19.084 cpu : usr=4.26%, sys=5.35%, ctx=753, majf=0, minf=1 00:17:19.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:19.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.084 issued rwts: total=5632,5838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.084 00:17:19.084 Run status group 0 (all jobs): 00:17:19.084 READ: bw=69.3MiB/s (72.7MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.8MB/s), io=70.0MiB (73.4MB), run=1005-1010msec 00:17:19.084 WRITE: bw=73.3MiB/s (76.9MB/s), 12.8MiB/s-23.3MiB/s (13.4MB/s-24.4MB/s), io=74.1MiB (77.7MB), run=1005-1010msec 00:17:19.084 00:17:19.084 Disk stats (read/write): 00:17:19.084 nvme0n1: ios=2610/2775, merge=0/0, ticks=18988/31428, in_queue=50416, util=90.78% 00:17:19.084 nvme0n2: ios=2973/3072, merge=0/0, ticks=37029/65375, in_queue=102404, util=88.36% 00:17:19.084 nvme0n3: ios=4647/4913, merge=0/0, ticks=52760/48222, in_queue=100982, util=91.95% 00:17:19.084 nvme0n4: ios=4630/4823, merge=0/0, ticks=54446/47977, in_queue=102423, util=96.78% 00:17:19.084 14:25:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:19.084 [global] 00:17:19.084 thread=1 00:17:19.084 invalidate=1 00:17:19.084 rw=randwrite 00:17:19.084 time_based=1 00:17:19.084 runtime=1 00:17:19.084 ioengine=libaio 00:17:19.084 direct=1 00:17:19.084 bs=4096 00:17:19.084 iodepth=128 00:17:19.084 norandommap=0 00:17:19.084 numjobs=1 00:17:19.084 00:17:19.084 verify_dump=1 00:17:19.084 verify_backlog=512 00:17:19.084 verify_state_save=0 00:17:19.084 do_verify=1 00:17:19.084 verify=crc32c-intel 00:17:19.084 [job0] 00:17:19.084 filename=/dev/nvme0n1 00:17:19.084 [job1] 00:17:19.084 filename=/dev/nvme0n2 00:17:19.084 [job2] 00:17:19.084 filename=/dev/nvme0n3 00:17:19.084 [job3] 00:17:19.085 filename=/dev/nvme0n4 00:17:19.085 Could not set queue depth (nvme0n1) 00:17:19.085 Could not set queue depth (nvme0n2) 00:17:19.085 Could not set queue depth (nvme0n3) 00:17:19.085 Could not set queue depth (nvme0n4) 00:17:19.345 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.345 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.345 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.345 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.345 fio-3.35 00:17:19.345 Starting 4 threads 00:17:20.729 00:17:20.729 job0: (groupid=0, jobs=1): err= 0: pid=3016370: Mon Jun 10 14:25:58 2024 00:17:20.729 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:17:20.729 slat (nsec): min=1238, max=31392k, avg=160320.76, stdev=1470185.32 00:17:20.729 clat (usec): min=3991, max=79195, avg=21144.29, stdev=16488.33 00:17:20.729 lat (usec): min=3995, max=80291, avg=21304.61, stdev=16645.19 00:17:20.729 clat percentiles (usec): 00:17:20.729 | 1.00th=[ 6456], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9896], 00:17:20.729 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11994], 60.00th=[15795], 00:17:20.729 | 70.00th=[21365], 80.00th=[38536], 90.00th=[50070], 95.00th=[58459], 00:17:20.729 | 99.00th=[65799], 99.50th=[71828], 99.90th=[73925], 99.95th=[76022], 00:17:20.729 | 99.99th=[79168] 00:17:20.729 write: IOPS=3437, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1003msec); 0 zone resets 00:17:20.729 slat (usec): min=2, max=24609, avg=142.05, stdev=1121.93 00:17:20.729 clat (usec): min=1283, max=65104, avg=18056.99, stdev=12835.86 00:17:20.729 lat (usec): min=1295, max=65133, avg=18199.04, stdev=12959.11 00:17:20.729 clat percentiles (usec): 00:17:20.729 | 1.00th=[ 4752], 5.00th=[ 6456], 10.00th=[ 7635], 20.00th=[ 8356], 00:17:20.729 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11600], 60.00th=[12649], 00:17:20.729 | 70.00th=[25035], 80.00th=[30016], 90.00th=[40633], 95.00th=[44303], 00:17:20.729 | 99.00th=[50594], 99.50th=[54789], 99.90th=[56886], 99.95th=[58459], 00:17:20.729 | 99.99th=[65274] 00:17:20.729 bw ( KiB/s): min= 8192, max=18376, per=15.88%, avg=13284.00, stdev=7201.18, samples=2 00:17:20.729 iops : min= 2048, max= 4594, avg=3321.00, stdev=1800.29, samples=2 00:17:20.729 lat (msec) : 2=0.14%, 4=0.05%, 10=27.25%, 20=40.64%, 50=26.20% 00:17:20.729 lat (msec) : 100=5.72% 00:17:20.729 cpu : usr=2.40%, sys=3.29%, ctx=244, majf=0, minf=1 00:17:20.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:17:20.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.729 issued rwts: total=3072,3448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.729 job1: (groupid=0, jobs=1): err= 0: pid=3016380: Mon Jun 10 14:25:58 2024 00:17:20.729 read: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec) 00:17:20.729 slat (nsec): min=1231, max=3595.0k, avg=51560.59, stdev=314345.42 00:17:20.729 clat (usec): min=3483, max=11440, avg=6709.14, stdev=989.25 00:17:20.729 lat (usec): min=3670, max=11452, avg=6760.70, stdev=1024.24 00:17:20.729 clat percentiles (usec): 00:17:20.729 | 1.00th=[ 4555], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 5932], 00:17:20.729 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 6980], 00:17:20.729 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7832], 95.00th=[ 8455], 00:17:20.729 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[10159], 99.95th=[10814], 
00:17:20.729 | 99.99th=[11469] 00:17:20.729 write: IOPS=9482, BW=37.0MiB/s (38.8MB/s)(37.2MiB/1003msec); 0 zone resets 00:17:20.730 slat (usec): min=2, max=14264, avg=50.67, stdev=331.20 00:17:20.730 clat (usec): min=2589, max=33212, avg=6820.34, stdev=2596.90 00:17:20.730 lat (usec): min=2766, max=33250, avg=6871.01, stdev=2626.75 00:17:20.730 clat percentiles (usec): 00:17:20.730 | 1.00th=[ 3785], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 5604], 00:17:20.730 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 6718], 00:17:20.730 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7635], 95.00th=[ 9110], 00:17:20.730 | 99.00th=[23462], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:17:20.730 | 99.99th=[33162] 00:17:20.730 bw ( KiB/s): min=37432, max=37640, per=44.87%, avg=37536.00, stdev=147.08, samples=2 00:17:20.730 iops : min= 9358, max= 9410, avg=9384.00, stdev=36.77, samples=2 00:17:20.730 lat (msec) : 4=0.85%, 10=96.89%, 20=1.58%, 50=0.68% 00:17:20.730 cpu : usr=7.78%, sys=6.89%, ctx=1018, majf=0, minf=1 00:17:20.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:20.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.730 issued rwts: total=9216,9511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.730 job2: (groupid=0, jobs=1): err= 0: pid=3016399: Mon Jun 10 14:25:58 2024 00:17:20.730 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:17:20.730 slat (nsec): min=1972, max=15334k, avg=107417.40, stdev=871890.39 00:17:20.730 clat (usec): min=5503, max=45204, avg=15049.35, stdev=4646.52 00:17:20.730 lat (usec): min=5512, max=45229, avg=15156.77, stdev=4728.71 00:17:20.730 clat percentiles (usec): 00:17:20.730 | 1.00th=[ 7963], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12125], 00:17:20.730 | 30.00th=[12911], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:17:20.730 | 70.00th=[15926], 80.00th=[16909], 90.00th=[20317], 95.00th=[25822], 00:17:20.730 | 99.00th=[31065], 99.50th=[31065], 99.90th=[41157], 99.95th=[41157], 00:17:20.730 | 99.99th=[45351] 00:17:20.730 write: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1009msec); 0 zone resets 00:17:20.730 slat (nsec): min=1616, max=13170k, avg=103483.78, stdev=754718.52 00:17:20.730 clat (usec): min=509, max=54422, avg=14579.59, stdev=9162.25 00:17:20.730 lat (usec): min=520, max=54427, avg=14683.08, stdev=9237.56 00:17:20.730 clat percentiles (usec): 00:17:20.730 | 1.00th=[ 1713], 5.00th=[ 4424], 10.00th=[ 7767], 20.00th=[ 9503], 00:17:20.730 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11863], 60.00th=[13435], 00:17:20.730 | 70.00th=[14615], 80.00th=[17695], 90.00th=[26608], 95.00th=[39584], 00:17:20.730 | 99.00th=[46400], 99.50th=[49021], 99.90th=[54264], 99.95th=[54264], 00:17:20.730 | 99.99th=[54264] 00:17:20.730 bw ( KiB/s): min=14968, max=20480, per=21.19%, avg=17724.00, stdev=3897.57, samples=2 00:17:20.730 iops : min= 3742, max= 5120, avg=4431.00, stdev=974.39, samples=2 00:17:20.730 lat (usec) : 750=0.06%, 1000=0.12% 00:17:20.730 lat (msec) : 2=0.49%, 4=1.27%, 10=14.26%, 20=69.62%, 50=13.94% 00:17:20.730 lat (msec) : 100=0.25% 00:17:20.730 cpu : usr=3.57%, sys=5.26%, ctx=263, majf=0, minf=1 00:17:20.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:20.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.730 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.730 issued rwts: total=4096,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.730 job3: (groupid=0, jobs=1): err= 0: pid=3016406: Mon Jun 10 14:25:58 2024 00:17:20.730 read: IOPS=3311, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:17:20.730 slat (nsec): min=1420, max=39411k, avg=169021.84, stdev=1529475.10 00:17:20.730 clat (msec): min=3, max=110, avg=20.07, stdev=16.50 00:17:20.730 lat (msec): min=3, max=110, avg=20.24, stdev=16.67 00:17:20.730 clat percentiles (msec): 00:17:20.730 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:17:20.730 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 16], 00:17:20.730 | 70.00th=[ 22], 80.00th=[ 29], 90.00th=[ 37], 95.00th=[ 51], 00:17:20.730 | 99.00th=[ 89], 99.50th=[ 89], 99.90th=[ 106], 99.95th=[ 108], 00:17:20.730 | 99.99th=[ 111] 00:17:20.730 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:17:20.730 slat (usec): min=2, max=20859, avg=113.03, stdev=903.19 00:17:20.730 clat (usec): min=1211, max=104966, avg=16864.48, stdev=12548.72 00:17:20.730 lat (usec): min=1221, max=104975, avg=16977.51, stdev=12645.12 00:17:20.730 clat percentiles (msec): 00:17:20.730 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:17:20.730 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:17:20.730 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 30], 95.00th=[ 47], 00:17:20.730 | 99.00th=[ 66], 99.50th=[ 66], 99.90th=[ 86], 99.95th=[ 86], 00:17:20.730 | 99.99th=[ 106] 00:17:20.730 bw ( KiB/s): min=12288, max=16384, per=17.14%, avg=14336.00, stdev=2896.31, samples=2 00:17:20.730 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:20.730 lat (msec) : 2=0.07%, 4=0.75%, 10=15.13%, 20=54.90%, 50=24.21% 00:17:20.730 lat (msec) : 100=4.85%, 250=0.09% 00:17:20.730 cpu : usr=2.49%, sys=3.99%, ctx=290, majf=0, minf=1 00:17:20.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:20.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.730 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.730 00:17:20.730 Run status group 0 (all jobs): 00:17:20.730 READ: bw=76.3MiB/s (80.0MB/s), 12.0MiB/s-35.9MiB/s (12.5MB/s-37.6MB/s), io=77.0MiB (80.7MB), run=1003-1009msec 00:17:20.730 WRITE: bw=81.7MiB/s (85.7MB/s), 13.4MiB/s-37.0MiB/s (14.1MB/s-38.8MB/s), io=82.4MiB (86.4MB), run=1003-1009msec 00:17:20.730 00:17:20.730 Disk stats (read/write): 00:17:20.730 nvme0n1: ios=2610/2926, merge=0/0, ticks=27152/25696, in_queue=52848, util=91.18% 00:17:20.730 nvme0n2: ios=7721/7771, merge=0/0, ticks=25049/24670, in_queue=49719, util=88.89% 00:17:20.730 nvme0n3: ios=3584/4095, merge=0/0, ticks=52503/49628, in_queue=102131, util=88.42% 00:17:20.730 nvme0n4: ios=2609/2991, merge=0/0, ticks=38919/38614, in_queue=77533, util=100.00% 00:17:20.730 14:25:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:20.730 14:25:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3016527 00:17:20.730 14:25:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:20.730 14:25:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
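The commands at target/fio.sh@58-61 above start a 10-second read job in the background, record its pid in fio_pid, and give it a short head start before the backing bdevs are deleted out from under it, so the Remote I/O errors reported further down are the expected outcome rather than a harness failure. A minimal stand-alone sketch of that hotplug pattern follows; it calls fio directly instead of going through scripts/fio-wrapper, and the bdev name Malloc0 and the rpc.py path are assumptions for illustration only, not a copy of fio.sh.

    # Simplified hotplug check: run a time-based read job in the background,
    # then delete the bdev backing the exported namespace while it is running.
    rpc=./scripts/rpc.py                 # assumed location of SPDK's rpc.py

    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --ioengine=libaio --direct=1 --iodepth=1 --time_based --runtime=10 &
    fio_pid=$!

    sleep 3                              # let some I/O complete first
    $rpc bdev_malloc_delete Malloc0      # namespace loses its backing bdev

    # fio now hits Remote I/O errors; a non-zero exit status is what we want.
    if wait "$fio_pid"; then
        echo "ERROR: fio succeeded although its bdev was removed"
        exit 1
    fi
    echo "nvmf hotplug test: fio failed as expected"
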
00:17:20.730 [global] 00:17:20.730 thread=1 00:17:20.730 invalidate=1 00:17:20.730 rw=read 00:17:20.730 time_based=1 00:17:20.730 runtime=10 00:17:20.730 ioengine=libaio 00:17:20.730 direct=1 00:17:20.730 bs=4096 00:17:20.730 iodepth=1 00:17:20.730 norandommap=1 00:17:20.730 numjobs=1 00:17:20.730 00:17:20.730 [job0] 00:17:20.730 filename=/dev/nvme0n1 00:17:20.730 [job1] 00:17:20.730 filename=/dev/nvme0n2 00:17:20.730 [job2] 00:17:20.730 filename=/dev/nvme0n3 00:17:20.731 [job3] 00:17:20.731 filename=/dev/nvme0n4 00:17:20.731 Could not set queue depth (nvme0n1) 00:17:20.731 Could not set queue depth (nvme0n2) 00:17:20.731 Could not set queue depth (nvme0n3) 00:17:20.731 Could not set queue depth (nvme0n4) 00:17:20.991 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.991 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.991 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.991 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.991 fio-3.35 00:17:20.991 Starting 4 threads 00:17:23.558 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:23.887 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:23.887 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4431872, buflen=4096 00:17:23.887 fio: pid=3016872, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.148 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=282624, buflen=4096 00:17:24.148 fio: pid=3016866, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.148 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.148 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:24.148 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17092608, buflen=4096 00:17:24.148 fio: pid=3016829, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.410 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.410 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:24.410 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11599872, buflen=4096 00:17:24.410 fio: pid=3016846, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.410 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.410 14:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:24.410 00:17:24.410 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3016829: Mon Jun 10 14:26:01 2024 00:17:24.410 read: IOPS=1369, BW=5478KiB/s (5610kB/s)(16.3MiB/3047msec) 00:17:24.410 slat (usec): min=5, max=19726, 
avg=31.12, stdev=350.99 00:17:24.410 clat (usec): min=201, max=3860, avg=693.19, stdev=143.01 00:17:24.410 lat (usec): min=208, max=20524, avg=724.32, stdev=380.84 00:17:24.410 clat percentiles (usec): 00:17:24.410 | 1.00th=[ 351], 5.00th=[ 453], 10.00th=[ 510], 20.00th=[ 586], 00:17:24.410 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 709], 60.00th=[ 742], 00:17:24.410 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 873], 00:17:24.410 | 99.00th=[ 979], 99.50th=[ 1037], 99.90th=[ 1172], 99.95th=[ 2114], 00:17:24.410 | 99.99th=[ 3851] 00:17:24.410 bw ( KiB/s): min= 5280, max= 5888, per=55.13%, avg=5512.00, stdev=231.17, samples=5 00:17:24.410 iops : min= 1320, max= 1472, avg=1378.00, stdev=57.79, samples=5 00:17:24.410 lat (usec) : 250=0.05%, 500=9.15%, 750=53.91%, 1000=36.15% 00:17:24.410 lat (msec) : 2=0.65%, 4=0.07% 00:17:24.410 cpu : usr=1.77%, sys=5.38%, ctx=4176, majf=0, minf=1 00:17:24.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 issued rwts: total=4174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.410 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3016846: Mon Jun 10 14:26:01 2024 00:17:24.410 read: IOPS=868, BW=3472KiB/s (3555kB/s)(11.1MiB/3263msec) 00:17:24.410 slat (usec): min=6, max=17251, avg=44.27, stdev=442.16 00:17:24.410 clat (usec): min=571, max=1622, avg=1100.99, stdev=82.66 00:17:24.410 lat (usec): min=596, max=18382, avg=1145.27, stdev=449.54 00:17:24.410 clat percentiles (usec): 00:17:24.410 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1045], 00:17:24.410 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:17:24.410 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1221], 00:17:24.410 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:24.410 | 99.99th=[ 1631] 00:17:24.410 bw ( KiB/s): min= 3440, max= 3560, per=34.88%, avg=3487.83, stdev=46.55, samples=6 00:17:24.410 iops : min= 860, max= 890, avg=871.83, stdev=11.77, samples=6 00:17:24.410 lat (usec) : 750=0.32%, 1000=9.64% 00:17:24.410 lat (msec) : 2=90.01% 00:17:24.410 cpu : usr=1.69%, sys=3.25%, ctx=2840, majf=0, minf=1 00:17:24.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 issued rwts: total=2833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.410 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3016866: Mon Jun 10 14:26:01 2024 00:17:24.410 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(276KiB/2851msec) 00:17:24.410 slat (usec): min=25, max=16685, avg=264.47, stdev=1991.14 00:17:24.410 clat (usec): min=838, max=42951, avg=41029.33, stdev=4937.19 00:17:24.410 lat (usec): min=876, max=58996, avg=41297.25, stdev=5386.28 00:17:24.410 clat percentiles (usec): 00:17:24.410 | 1.00th=[ 840], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:24.410 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:24.410 | 70.00th=[42206], 80.00th=[42206], 
90.00th=[42206], 95.00th=[42206], 00:17:24.410 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:24.410 | 99.99th=[42730] 00:17:24.410 bw ( KiB/s): min= 96, max= 104, per=0.97%, avg=97.60, stdev= 3.58, samples=5 00:17:24.410 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:24.410 lat (usec) : 1000=1.43% 00:17:24.410 lat (msec) : 50=97.14% 00:17:24.410 cpu : usr=0.00%, sys=0.14%, ctx=71, majf=0, minf=1 00:17:24.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.410 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.411 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3016872: Mon Jun 10 14:26:01 2024 00:17:24.411 read: IOPS=408, BW=1631KiB/s (1671kB/s)(4328KiB/2653msec) 00:17:24.411 slat (nsec): min=6520, max=63157, avg=25862.69, stdev=2892.09 00:17:24.411 clat (usec): min=274, max=42027, avg=2418.89, stdev=7612.46 00:17:24.411 lat (usec): min=301, max=42052, avg=2444.75, stdev=7612.15 00:17:24.411 clat percentiles (usec): 00:17:24.411 | 1.00th=[ 478], 5.00th=[ 627], 10.00th=[ 701], 20.00th=[ 783], 00:17:24.411 | 30.00th=[ 898], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1012], 00:17:24.411 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1156], 00:17:24.411 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:24.411 | 99.99th=[42206] 00:17:24.411 bw ( KiB/s): min= 96, max= 4168, per=17.24%, avg=1724.80, stdev=1943.76, samples=5 00:17:24.411 iops : min= 24, max= 1042, avg=431.20, stdev=485.94, samples=5 00:17:24.411 lat (usec) : 500=1.39%, 750=14.04%, 1000=39.80% 00:17:24.411 lat (msec) : 2=41.00%, 50=3.69% 00:17:24.411 cpu : usr=0.72%, sys=1.58%, ctx=1084, majf=0, minf=2 00:17:24.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:24.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.411 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:24.411 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:24.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:24.411 00:17:24.411 Run status group 0 (all jobs): 00:17:24.411 READ: bw=9998KiB/s (10.2MB/s), 96.8KiB/s-5478KiB/s (99.1kB/s-5610kB/s), io=31.9MiB (33.4MB), run=2653-3263msec 00:17:24.411 00:17:24.411 Disk stats (read/write): 00:17:24.411 nvme0n1: ios=3893/0, merge=0/0, ticks=2288/0, in_queue=2288, util=93.82% 00:17:24.411 nvme0n2: ios=2690/0, merge=0/0, ticks=2665/0, in_queue=2665, util=94.70% 00:17:24.411 nvme0n3: ios=68/0, merge=0/0, ticks=2791/0, in_queue=2791, util=95.86% 00:17:24.411 nvme0n4: ios=1080/0, merge=0/0, ticks=2464/0, in_queue=2464, util=96.43% 00:17:24.672 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.672 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:24.933 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.933 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:25.193 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.193 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:25.454 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.454 14:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:25.454 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:25.454 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3016527 00:17:25.454 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:25.454 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:25.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:25.715 nvmf hotplug test: fio failed as expected 00:17:25.715 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.975 rmmod nvme_tcp 00:17:25.975 rmmod nvme_fabrics 00:17:25.975 rmmod nvme_keyring 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3013011 ']' 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3013011 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 3013011 ']' 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 3013011 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3013011 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3013011' 00:17:25.975 killing process with pid 3013011 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 3013011 00:17:25.975 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 3013011 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.235 14:26:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.146 14:26:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.146 00:17:28.146 real 0m29.181s 00:17:28.146 user 2m37.284s 00:17:28.146 sys 0m8.729s 00:17:28.146 14:26:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:28.146 14:26:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 ************************************ 00:17:28.146 END TEST nvmf_fio_target 00:17:28.146 ************************************ 00:17:28.146 14:26:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:28.146 14:26:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:28.146 14:26:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:28.146 14:26:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.146 ************************************ 00:17:28.146 START TEST nvmf_bdevio 00:17:28.146 ************************************ 00:17:28.146 14:26:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:28.407 * Looking for test storage... 00:17:28.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.407 14:26:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.408 14:26:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:34.991 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:34.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:34.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:34.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:34.992 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.992 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.252 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.252 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:17:35.253 00:17:35.253 --- 10.0.0.2 ping statistics --- 00:17:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.253 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:17:35.253 00:17:35.253 --- 10.0.0.1 ping statistics --- 00:17:35.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.253 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3021980 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3021980 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 3021980 ']' 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:35.253 14:26:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:35.253 [2024-06-10 14:26:12.700866] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:17:35.253 [2024-06-10 14:26:12.700915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.253 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.253 [2024-06-10 14:26:12.782883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.513 [2024-06-10 14:26:12.847746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.513 [2024-06-10 14:26:12.847779] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
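For reference, the nvmf_tcp_init sequence traced above reduces to a short set of iproute2/iptables steps. This is a condensed sketch using the names from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, port 4420); the real helper lives in test/nvmf/common.sh:

# Sketch of the nvmf_tcp_init steps traced above. The target-side port (cvl_0_0)
# is moved into a private namespace; the initiator stays in the root namespace on cvl_0_1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check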
00:17:35.513 [2024-06-10 14:26:12.847786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.513 [2024-06-10 14:26:12.847792] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.513 [2024-06-10 14:26:12.847798] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.513 [2024-06-10 14:26:12.847937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.513 [2024-06-10 14:26:12.848076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:35.513 [2024-06-10 14:26:12.848229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.513 [2024-06-10 14:26:12.848230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.084 [2024-06-10 14:26:13.617487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.084 Malloc0 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:36.084 [2024-06-10 14:26:13.671040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:36.084 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:36.345 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:36.345 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:36.345 { 00:17:36.345 "params": { 00:17:36.345 "name": "Nvme$subsystem", 00:17:36.345 "trtype": "$TEST_TRANSPORT", 00:17:36.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.346 "adrfam": "ipv4", 00:17:36.346 "trsvcid": "$NVMF_PORT", 00:17:36.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.346 "hdgst": ${hdgst:-false}, 00:17:36.346 "ddgst": ${ddgst:-false} 00:17:36.346 }, 00:17:36.346 "method": "bdev_nvme_attach_controller" 00:17:36.346 } 00:17:36.346 EOF 00:17:36.346 )") 00:17:36.346 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:36.346 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:36.346 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:36.346 14:26:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:36.346 "params": { 00:17:36.346 "name": "Nvme1", 00:17:36.346 "trtype": "tcp", 00:17:36.346 "traddr": "10.0.0.2", 00:17:36.346 "adrfam": "ipv4", 00:17:36.346 "trsvcid": "4420", 00:17:36.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.346 "hdgst": false, 00:17:36.346 "ddgst": false 00:17:36.346 }, 00:17:36.346 "method": "bdev_nvme_attach_controller" 00:17:36.346 }' 00:17:36.346 [2024-06-10 14:26:13.723986] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
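Stripped of the xtrace prefixes, the bdevio target setup above is a short RPC sequence plus one initiator-side invocation. A condensed sketch follows (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier; the process substitution stands in for the /dev/fd/62 redirection used in the trace):

# Condensed view of the bdevio.sh setup traced above (sketch).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS, '-u 8192' sets in-capsule data size
rpc_cmd bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio then attaches to that subsystem as an initiator, fed a JSON config with
# a bdev_nvme_attach_controller entry (Nvme1) generated by gen_nvmf_target_json:
test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)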
00:17:36.346 [2024-06-10 14:26:13.724054] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022088 ] 00:17:36.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.346 [2024-06-10 14:26:13.795617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.346 [2024-06-10 14:26:13.893815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.346 [2024-06-10 14:26:13.893957] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.346 [2024-06-10 14:26:13.893962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.605 I/O targets: 00:17:36.605 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:36.605 00:17:36.605 00:17:36.605 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.605 http://cunit.sourceforge.net/ 00:17:36.605 00:17:36.605 00:17:36.605 Suite: bdevio tests on: Nvme1n1 00:17:36.605 Test: blockdev write read block ...passed 00:17:36.605 Test: blockdev write zeroes read block ...passed 00:17:36.605 Test: blockdev write zeroes read no split ...passed 00:17:36.866 Test: blockdev write zeroes read split ...passed 00:17:36.866 Test: blockdev write zeroes read split partial ...passed 00:17:36.866 Test: blockdev reset ...[2024-06-10 14:26:14.299192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.866 [2024-06-10 14:26:14.299254] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdc560 (9): Bad file descriptor 00:17:36.866 [2024-06-10 14:26:14.403473] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:36.866 passed 00:17:36.866 Test: blockdev write read 8 blocks ...passed 00:17:36.866 Test: blockdev write read size > 128k ...passed 00:17:36.866 Test: blockdev write read invalid size ...passed 00:17:37.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:37.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:37.126 Test: blockdev write read max offset ...passed 00:17:37.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:37.126 Test: blockdev writev readv 8 blocks ...passed 00:17:37.126 Test: blockdev writev readv 30 x 1block ...passed 00:17:37.126 Test: blockdev writev readv block ...passed 00:17:37.126 Test: blockdev writev readv size > 128k ...passed 00:17:37.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:37.126 Test: blockdev comparev and writev ...[2024-06-10 14:26:14.670524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.670547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.670558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.670563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.671070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.671078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.671087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.671092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.671588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.671595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.671604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.671609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:37.126 [2024-06-10 14:26:14.672075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.126 [2024-06-10 14:26:14.672082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:37.127 [2024-06-10 14:26:14.672095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.127 [2024-06-10 14:26:14.672100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:37.127 passed 00:17:37.387 Test: blockdev nvme passthru rw ...passed 00:17:37.387 Test: blockdev nvme passthru vendor specific ...[2024-06-10 14:26:14.757019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.387 [2024-06-10 14:26:14.757029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:37.387 [2024-06-10 14:26:14.757349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.387 [2024-06-10 14:26:14.757356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:37.387 [2024-06-10 14:26:14.757690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.387 [2024-06-10 14:26:14.757697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:37.387 [2024-06-10 14:26:14.758012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.387 [2024-06-10 14:26:14.758019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:37.387 passed 00:17:37.387 Test: blockdev nvme admin passthru ...passed 00:17:37.387 Test: blockdev copy ...passed 00:17:37.387 00:17:37.387 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.387 suites 1 1 n/a 0 0 00:17:37.387 tests 23 23 23 0 0 00:17:37.387 asserts 152 152 152 0 n/a 00:17:37.387 00:17:37.387 Elapsed time = 1.439 seconds 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.387 14:26:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.387 rmmod nvme_tcp 00:17:37.387 rmmod nvme_fabrics 00:17:37.647 rmmod nvme_keyring 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3021980 ']' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
3021980 ']' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3021980' 00:17:37.647 killing process with pid 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 3021980 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.647 14:26:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.189 14:26:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.189 00:17:40.189 real 0m11.555s 00:17:40.189 user 0m13.879s 00:17:40.189 sys 0m5.485s 00:17:40.189 14:26:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:40.189 14:26:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.189 ************************************ 00:17:40.189 END TEST nvmf_bdevio 00:17:40.189 ************************************ 00:17:40.189 14:26:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:40.189 14:26:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:40.189 14:26:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:40.189 14:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.189 ************************************ 00:17:40.189 START TEST nvmf_auth_target 00:17:40.189 ************************************ 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:40.189 * Looking for test storage... 
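The bdevio suite above closes with nvmftestfini before auth.sh begins its own setup. Condensed from the commands visible in that trace, the teardown is roughly the following sketch (remove_spdk_ns runs with its trace redirected away, so the namespace-deletion step is an assumption):

sync
modprobe -v -r nvme-tcp          # unloads nvme_tcp plus its dependents nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop this suite's nvmf_tgt
ip netns delete cvl_0_0_ns_spdk  # assumed: what remove_spdk_ns does (its output is suppressed above)
ip -4 addr flush cvl_0_1         # drop the initiator-side address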
00:17:40.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.189 14:26:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.190 14:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.776 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.777 14:26:24 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:46.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:46.777 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:46.777 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:46.777 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.777 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:17:47.038 00:17:47.038 --- 10.0.0.2 ping statistics --- 00:17:47.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.038 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:47.038 00:17:47.038 --- 10.0.0.1 ping statistics --- 00:17:47.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.038 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3026548 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3026548 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3026548 ']' 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
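As in the bdevio run, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers. A minimal sketch of that pattern, not the actual autotest_common.sh helper (the polling details here are assumed):

# Sketch of nvmfappstart/waitforlisten as used above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
for _ in $(seq 1 100); do
    # treat the target as ready once its RPC socket accepts a request
    if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done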
00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:47.038 14:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3026765 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bf31999815280fc943b2fa9d1cc9ae09c0e2be07757c45e5 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gom 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bf31999815280fc943b2fa9d1cc9ae09c0e2be07757c45e5 0 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bf31999815280fc943b2fa9d1cc9ae09c0e2be07757c45e5 0 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bf31999815280fc943b2fa9d1cc9ae09c0e2be07757c45e5 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:47.979 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.gom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff8ffc0c649783558474f5877b542f87349adec833a6d68e5deb271124bea0ab 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xmw 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff8ffc0c649783558474f5877b542f87349adec833a6d68e5deb271124bea0ab 3 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff8ffc0c649783558474f5877b542f87349adec833a6d68e5deb271124bea0ab 3 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff8ffc0c649783558474f5877b542f87349adec833a6d68e5deb271124bea0ab 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xmw 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xmw 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xmw 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9ad620c80f340bc32095d62bed8e29e 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OLU 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9ad620c80f340bc32095d62bed8e29e 1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9ad620c80f340bc32095d62bed8e29e 1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e9ad620c80f340bc32095d62bed8e29e 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OLU 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OLU 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.OLU 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87284abc6bd2b6c1621870e8ccd76315c72d61e578d00b8b 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tcg 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87284abc6bd2b6c1621870e8ccd76315c72d61e578d00b8b 2 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87284abc6bd2b6c1621870e8ccd76315c72d61e578d00b8b 2 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87284abc6bd2b6c1621870e8ccd76315c72d61e578d00b8b 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tcg 00:17:48.240 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tcg 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.tcg 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e225b8df2e180ced50d0f640dc6506ebcb3de57beeae3727 00:17:48.501 
14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Zh5 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e225b8df2e180ced50d0f640dc6506ebcb3de57beeae3727 2 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e225b8df2e180ced50d0f640dc6506ebcb3de57beeae3727 2 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e225b8df2e180ced50d0f640dc6506ebcb3de57beeae3727 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Zh5 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Zh5 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Zh5 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4758899b0036487d42ba1b7698e6935e 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Q6r 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4758899b0036487d42ba1b7698e6935e 1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4758899b0036487d42ba1b7698e6935e 1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4758899b0036487d42ba1b7698e6935e 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Q6r 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Q6r 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Q6r 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=54667f5f0413e9946cf4d9f7e2900fc5dae5ea10626cf6c2e1d4a4449629349d 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XZJ 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 54667f5f0413e9946cf4d9f7e2900fc5dae5ea10626cf6c2e1d4a4449629349d 3 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 54667f5f0413e9946cf4d9f7e2900fc5dae5ea10626cf6c2e1d4a4449629349d 3 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=54667f5f0413e9946cf4d9f7e2900fc5dae5ea10626cf6c2e1d4a4449629349d 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:48.501 14:26:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XZJ 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XZJ 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.XZJ 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3026548 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3026548 ']' 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:48.501 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
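The key-generation entries above all follow the same gen_dhchap_key pattern: pull len/2 random bytes out of /dev/urandom with xxd, drop the resulting hex string into a mktemp file named spdk.key-<digest>.XXX, run a small inline python formatter over it, then chmod the file to 0600 and echo its path into keys[]/ckeys[]. A minimal bash sketch of that pattern follows. The body of the "python -" formatter is not reproduced in the log, so the python below is an illustrative stand-in: it assumes the secret takes the DHHC-1:<digest-id>:<base64>: form seen in the nvme connect calls further down, with the base64 payload carrying the ASCII hex key plus what appears to be a 4-byte CRC-32 trailer.

gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex chars>, e.g. gen_dhchap_key sha384 48
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # <len> hex characters of random key material
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Illustrative stand-in for the inline "python -" formatter (assumption: the
    # secret is base64 of the ASCII key plus a little-endian CRC-32 of it, which
    # matches the DHHC-1:<id>:<base64>: strings passed to nvme connect below).
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Once both daemons are listening (the waitforlisten calls around this point), the files this produces (/tmp/spdk.key-null.gom, /tmp/spdk.key-sha256.OLU, /tmp/spdk.key-sha384.tcg, and so on) are registered with keyring_file_add_key in the entries that follow, once against the nvmf target at /var/tmp/spdk.sock via rpc_cmd and once against the host application at /var/tmp/host.sock via hostrpc.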
00:17:48.502 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:48.502 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3026765 /var/tmp/host.sock 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3026765 ']' 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:48.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:48.762 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gom 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gom 00:17:49.022 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gom 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xmw ]] 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmw 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmw 00:17:49.281 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmw 00:17:49.541 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:49.541 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OLU 00:17:49.541 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.541 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.542 14:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.542 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OLU 00:17:49.542 14:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OLU 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.tcg ]] 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tcg 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tcg 00:17:49.542 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tcg 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Zh5 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.802 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Zh5 00:17:49.803 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Zh5 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Q6r ]] 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6r 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6r 00:17:50.063 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Q6r 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XZJ 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XZJ 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XZJ 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.324 14:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.584 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:50.584 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.585 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.846 00:17:50.846 14:26:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.846 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.846 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.106 { 00:17:51.106 "cntlid": 1, 00:17:51.106 "qid": 0, 00:17:51.106 "state": "enabled", 00:17:51.106 "listen_address": { 00:17:51.106 "trtype": "TCP", 00:17:51.106 "adrfam": "IPv4", 00:17:51.106 "traddr": "10.0.0.2", 00:17:51.106 "trsvcid": "4420" 00:17:51.106 }, 00:17:51.106 "peer_address": { 00:17:51.106 "trtype": "TCP", 00:17:51.106 "adrfam": "IPv4", 00:17:51.106 "traddr": "10.0.0.1", 00:17:51.106 "trsvcid": "36928" 00:17:51.106 }, 00:17:51.106 "auth": { 00:17:51.106 "state": "completed", 00:17:51.106 "digest": "sha256", 00:17:51.106 "dhgroup": "null" 00:17:51.106 } 00:17:51.106 } 00:17:51.106 ]' 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.106 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.367 14:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.019 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.280 14:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.540 00:17:52.540 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.540 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.540 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.800 { 00:17:52.800 "cntlid": 3, 00:17:52.800 "qid": 0, 00:17:52.800 "state": "enabled", 00:17:52.800 "listen_address": { 00:17:52.800 
"trtype": "TCP", 00:17:52.800 "adrfam": "IPv4", 00:17:52.800 "traddr": "10.0.0.2", 00:17:52.800 "trsvcid": "4420" 00:17:52.800 }, 00:17:52.800 "peer_address": { 00:17:52.800 "trtype": "TCP", 00:17:52.800 "adrfam": "IPv4", 00:17:52.800 "traddr": "10.0.0.1", 00:17:52.800 "trsvcid": "36958" 00:17:52.800 }, 00:17:52.800 "auth": { 00:17:52.800 "state": "completed", 00:17:52.800 "digest": "sha256", 00:17:52.800 "dhgroup": "null" 00:17:52.800 } 00:17:52.800 } 00:17:52.800 ]' 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.800 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.060 14:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.001 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.262 00:17:54.262 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.262 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.262 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.523 { 00:17:54.523 "cntlid": 5, 00:17:54.523 "qid": 0, 00:17:54.523 "state": "enabled", 00:17:54.523 "listen_address": { 00:17:54.523 "trtype": "TCP", 00:17:54.523 "adrfam": "IPv4", 00:17:54.523 "traddr": "10.0.0.2", 00:17:54.523 "trsvcid": "4420" 00:17:54.523 }, 00:17:54.523 "peer_address": { 00:17:54.523 "trtype": "TCP", 00:17:54.523 "adrfam": "IPv4", 00:17:54.523 "traddr": "10.0.0.1", 00:17:54.523 "trsvcid": "39440" 00:17:54.523 }, 00:17:54.523 "auth": { 00:17:54.523 "state": "completed", 00:17:54.523 "digest": "sha256", 00:17:54.523 "dhgroup": "null" 00:17:54.523 } 00:17:54.523 } 00:17:54.523 ]' 00:17:54.523 14:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.523 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.523 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.523 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.523 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.783 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.783 14:26:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.783 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.783 14:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.722 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.982 00:17:55.982 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.982 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.982 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.241 { 00:17:56.241 "cntlid": 7, 00:17:56.241 "qid": 0, 00:17:56.241 "state": "enabled", 00:17:56.241 "listen_address": { 00:17:56.241 "trtype": "TCP", 00:17:56.241 "adrfam": "IPv4", 00:17:56.241 "traddr": "10.0.0.2", 00:17:56.241 "trsvcid": "4420" 00:17:56.241 }, 00:17:56.241 "peer_address": { 00:17:56.241 "trtype": "TCP", 00:17:56.241 "adrfam": "IPv4", 00:17:56.241 "traddr": "10.0.0.1", 00:17:56.241 "trsvcid": "39466" 00:17:56.241 }, 00:17:56.241 "auth": { 00:17:56.241 "state": "completed", 00:17:56.241 "digest": "sha256", 00:17:56.241 "dhgroup": "null" 00:17:56.241 } 00:17:56.241 } 00:17:56.241 ]' 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.241 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.501 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.501 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.501 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.501 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.501 14:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.760 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.330 
14:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.330 14:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.590 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.591 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.851 00:17:57.851 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.851 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.851 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.110 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.110 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.110 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.110 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.110 14:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.111 14:26:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.111 { 00:17:58.111 "cntlid": 9, 00:17:58.111 "qid": 0, 00:17:58.111 "state": "enabled", 00:17:58.111 "listen_address": { 00:17:58.111 "trtype": "TCP", 00:17:58.111 "adrfam": "IPv4", 00:17:58.111 "traddr": "10.0.0.2", 00:17:58.111 "trsvcid": "4420" 00:17:58.111 }, 00:17:58.111 "peer_address": { 00:17:58.111 "trtype": "TCP", 00:17:58.111 "adrfam": "IPv4", 00:17:58.111 "traddr": "10.0.0.1", 00:17:58.111 "trsvcid": "39486" 00:17:58.111 }, 00:17:58.111 "auth": { 00:17:58.111 "state": "completed", 00:17:58.111 "digest": "sha256", 00:17:58.111 "dhgroup": "ffdhe2048" 00:17:58.111 } 00:17:58.111 } 00:17:58.111 ]' 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.111 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.370 14:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.312 14:26:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.312 14:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.572 00:17:59.572 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.572 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.572 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.831 { 00:17:59.831 "cntlid": 11, 00:17:59.831 "qid": 0, 00:17:59.831 "state": "enabled", 00:17:59.831 "listen_address": { 00:17:59.831 "trtype": "TCP", 00:17:59.831 "adrfam": "IPv4", 00:17:59.831 "traddr": "10.0.0.2", 00:17:59.831 "trsvcid": "4420" 00:17:59.831 }, 00:17:59.831 "peer_address": { 00:17:59.831 "trtype": "TCP", 00:17:59.831 "adrfam": "IPv4", 00:17:59.831 "traddr": "10.0.0.1", 00:17:59.831 "trsvcid": "39516" 00:17:59.831 }, 00:17:59.831 "auth": { 00:17:59.831 "state": "completed", 00:17:59.831 "digest": "sha256", 00:17:59.831 "dhgroup": "ffdhe2048" 00:17:59.831 } 00:17:59.831 } 00:17:59.831 ]' 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.831 14:26:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.831 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.091 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.091 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.091 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.091 14:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.030 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.290 00:18:01.550 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.550 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.550 14:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.550 { 00:18:01.550 "cntlid": 13, 00:18:01.550 "qid": 0, 00:18:01.550 "state": "enabled", 00:18:01.550 "listen_address": { 00:18:01.550 "trtype": "TCP", 00:18:01.550 "adrfam": "IPv4", 00:18:01.550 "traddr": "10.0.0.2", 00:18:01.550 "trsvcid": "4420" 00:18:01.550 }, 00:18:01.550 "peer_address": { 00:18:01.550 "trtype": "TCP", 00:18:01.550 "adrfam": "IPv4", 00:18:01.550 "traddr": "10.0.0.1", 00:18:01.550 "trsvcid": "39536" 00:18:01.550 }, 00:18:01.550 "auth": { 00:18:01.550 "state": "completed", 00:18:01.550 "digest": "sha256", 00:18:01.550 "dhgroup": "ffdhe2048" 00:18:01.550 } 00:18:01.550 } 00:18:01.550 ]' 00:18:01.550 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.810 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.071 14:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:02.642 14:26:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.642 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.900 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.159 00:18:03.159 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.159 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.159 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
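At this point the run is most of the way through the sha256 passes: the null and ffdhe2048 groups have been exercised for each keyid, and ffdhe3072 follows below. Every iteration repeats the same connect/verify/teardown sequence; condensed into one sketch, the next iteration in the log (ffdhe3072, keyid 0) looks roughly like the block below. This is a summary of commands already visible in the log, not the literal script text: rpc stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and $subnqn, $hostnqn and $hostid stand in for nqn.2024-03.io.spdk:cnode0 and the 00d0226a-fbea-ec11-9bc7-a4bf019282be host NQN/UUID.

rpc="scripts/rpc.py"                 # shorthand; plain $rpc talks to the nvmf target at /var/tmp/spdk.sock
host="-s /var/tmp/host.sock"         # hostrpc variant, aimed at the host-side RPC server

# host side: restrict the initiator to the digest/dhgroup under test
$rpc $host bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# target side: allow the host NQN with key0 (adding ckey0 makes the authentication bidirectional)
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# authenticate from the SPDK host stack, then confirm the qpair really completed DH-HMAC-CHAP
$rpc $host bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expected: "completed"
$rpc $host bdev_nvme_detach_controller nvme0
# repeat the handshake with the kernel initiator, feeding it the formatted DHHC-1 secrets
# (the literal strings shown expanded in the log, read back here from the generated key files)
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$(cat /tmp/spdk.key-null.gom)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.xmw)"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass checks the digest, dhgroup and auth state ("completed") reported by nvmf_subsystem_get_qpairs before tearing the qpair down, so a mismatch or a failed exchange fails the comparison rather than leaving a half-authenticated connection in place.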
00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.420 { 00:18:03.420 "cntlid": 15, 00:18:03.420 "qid": 0, 00:18:03.420 "state": "enabled", 00:18:03.420 "listen_address": { 00:18:03.420 "trtype": "TCP", 00:18:03.420 "adrfam": "IPv4", 00:18:03.420 "traddr": "10.0.0.2", 00:18:03.420 "trsvcid": "4420" 00:18:03.420 }, 00:18:03.420 "peer_address": { 00:18:03.420 "trtype": "TCP", 00:18:03.420 "adrfam": "IPv4", 00:18:03.420 "traddr": "10.0.0.1", 00:18:03.420 "trsvcid": "57574" 00:18:03.420 }, 00:18:03.420 "auth": { 00:18:03.420 "state": "completed", 00:18:03.420 "digest": "sha256", 00:18:03.420 "dhgroup": "ffdhe2048" 00:18:03.420 } 00:18:03.420 } 00:18:03.420 ]' 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.420 14:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.679 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.617 14:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.617 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.618 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.876 00:18:04.876 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.876 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.876 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.136 { 00:18:05.136 "cntlid": 17, 00:18:05.136 "qid": 0, 00:18:05.136 "state": "enabled", 00:18:05.136 "listen_address": { 00:18:05.136 "trtype": "TCP", 00:18:05.136 "adrfam": "IPv4", 00:18:05.136 "traddr": "10.0.0.2", 00:18:05.136 "trsvcid": "4420" 00:18:05.136 }, 00:18:05.136 "peer_address": { 00:18:05.136 "trtype": "TCP", 00:18:05.136 "adrfam": "IPv4", 00:18:05.136 "traddr": "10.0.0.1", 00:18:05.136 "trsvcid": "57610" 00:18:05.136 }, 00:18:05.136 "auth": { 00:18:05.136 "state": "completed", 00:18:05.136 "digest": "sha256", 00:18:05.136 "dhgroup": "ffdhe3072" 00:18:05.136 } 00:18:05.136 } 00:18:05.136 ]' 00:18:05.136 14:26:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.136 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.396 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.396 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.396 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.396 14:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:06.335 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.336 
14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.336 14:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.595 00:18:06.595 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.595 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.595 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.855 { 00:18:06.855 "cntlid": 19, 00:18:06.855 "qid": 0, 00:18:06.855 "state": "enabled", 00:18:06.855 "listen_address": { 00:18:06.855 "trtype": "TCP", 00:18:06.855 "adrfam": "IPv4", 00:18:06.855 "traddr": "10.0.0.2", 00:18:06.855 "trsvcid": "4420" 00:18:06.855 }, 00:18:06.855 "peer_address": { 00:18:06.855 "trtype": "TCP", 00:18:06.855 "adrfam": "IPv4", 00:18:06.855 "traddr": "10.0.0.1", 00:18:06.855 "trsvcid": "57632" 00:18:06.855 }, 00:18:06.855 "auth": { 00:18:06.855 "state": "completed", 00:18:06.855 "digest": "sha256", 00:18:06.855 "dhgroup": "ffdhe3072" 00:18:06.855 } 00:18:06.855 } 00:18:06.855 ]' 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.855 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.139 14:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.079 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.339 00:18:08.339 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.339 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:18:08.339 14:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.600 { 00:18:08.600 "cntlid": 21, 00:18:08.600 "qid": 0, 00:18:08.600 "state": "enabled", 00:18:08.600 "listen_address": { 00:18:08.600 "trtype": "TCP", 00:18:08.600 "adrfam": "IPv4", 00:18:08.600 "traddr": "10.0.0.2", 00:18:08.600 "trsvcid": "4420" 00:18:08.600 }, 00:18:08.600 "peer_address": { 00:18:08.600 "trtype": "TCP", 00:18:08.600 "adrfam": "IPv4", 00:18:08.600 "traddr": "10.0.0.1", 00:18:08.600 "trsvcid": "57666" 00:18:08.600 }, 00:18:08.600 "auth": { 00:18:08.600 "state": "completed", 00:18:08.600 "digest": "sha256", 00:18:08.600 "dhgroup": "ffdhe3072" 00:18:08.600 } 00:18:08.600 } 00:18:08.600 ]' 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.600 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.860 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.860 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.860 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.860 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.860 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.119 14:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.689 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.949 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.209 00:18:10.209 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.209 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.209 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.470 { 00:18:10.470 "cntlid": 23, 00:18:10.470 "qid": 0, 00:18:10.470 "state": "enabled", 00:18:10.470 "listen_address": { 00:18:10.470 "trtype": "TCP", 00:18:10.470 "adrfam": "IPv4", 00:18:10.470 "traddr": "10.0.0.2", 00:18:10.470 "trsvcid": "4420" 00:18:10.470 }, 00:18:10.470 "peer_address": { 00:18:10.470 "trtype": "TCP", 00:18:10.470 "adrfam": "IPv4", 
00:18:10.470 "traddr": "10.0.0.1", 00:18:10.470 "trsvcid": "57698" 00:18:10.470 }, 00:18:10.470 "auth": { 00:18:10.470 "state": "completed", 00:18:10.470 "digest": "sha256", 00:18:10.470 "dhgroup": "ffdhe3072" 00:18:10.470 } 00:18:10.470 } 00:18:10.470 ]' 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.470 14:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.470 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.470 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.470 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.731 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.731 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.731 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.731 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:11.736 14:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.736 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.996 00:18:11.996 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.996 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.997 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.256 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.256 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.256 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.256 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.257 14:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.257 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.257 { 00:18:12.257 "cntlid": 25, 00:18:12.257 "qid": 0, 00:18:12.257 "state": "enabled", 00:18:12.257 "listen_address": { 00:18:12.257 "trtype": "TCP", 00:18:12.257 "adrfam": "IPv4", 00:18:12.257 "traddr": "10.0.0.2", 00:18:12.257 "trsvcid": "4420" 00:18:12.257 }, 00:18:12.257 "peer_address": { 00:18:12.257 "trtype": "TCP", 00:18:12.257 "adrfam": "IPv4", 00:18:12.257 "traddr": "10.0.0.1", 00:18:12.257 "trsvcid": "57720" 00:18:12.257 }, 00:18:12.257 "auth": { 00:18:12.257 "state": "completed", 00:18:12.257 "digest": "sha256", 00:18:12.257 "dhgroup": "ffdhe4096" 00:18:12.257 } 00:18:12.257 } 00:18:12.257 ]' 00:18:12.257 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.257 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.257 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.516 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.516 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.516 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.516 14:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.516 14:26:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.776 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:13.345 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.345 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.345 14:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.345 14:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.345 14:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.346 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.346 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.346 14:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.606 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.865 00:18:13.865 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.865 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.865 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.125 { 00:18:14.125 "cntlid": 27, 00:18:14.125 "qid": 0, 00:18:14.125 "state": "enabled", 00:18:14.125 "listen_address": { 00:18:14.125 "trtype": "TCP", 00:18:14.125 "adrfam": "IPv4", 00:18:14.125 "traddr": "10.0.0.2", 00:18:14.125 "trsvcid": "4420" 00:18:14.125 }, 00:18:14.125 "peer_address": { 00:18:14.125 "trtype": "TCP", 00:18:14.125 "adrfam": "IPv4", 00:18:14.125 "traddr": "10.0.0.1", 00:18:14.125 "trsvcid": "41118" 00:18:14.125 }, 00:18:14.125 "auth": { 00:18:14.125 "state": "completed", 00:18:14.125 "digest": "sha256", 00:18:14.125 "dhgroup": "ffdhe4096" 00:18:14.125 } 00:18:14.125 } 00:18:14.125 ]' 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.125 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.385 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.385 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.385 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.385 14:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.325 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.326 14:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.586 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.845 
14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.845 { 00:18:15.845 "cntlid": 29, 00:18:15.845 "qid": 0, 00:18:15.845 "state": "enabled", 00:18:15.845 "listen_address": { 00:18:15.845 "trtype": "TCP", 00:18:15.845 "adrfam": "IPv4", 00:18:15.845 "traddr": "10.0.0.2", 00:18:15.845 "trsvcid": "4420" 00:18:15.845 }, 00:18:15.845 "peer_address": { 00:18:15.845 "trtype": "TCP", 00:18:15.845 "adrfam": "IPv4", 00:18:15.845 "traddr": "10.0.0.1", 00:18:15.845 "trsvcid": "41140" 00:18:15.845 }, 00:18:15.845 "auth": { 00:18:15.845 "state": "completed", 00:18:15.845 "digest": "sha256", 00:18:15.845 "dhgroup": "ffdhe4096" 00:18:15.845 } 00:18:15.845 } 00:18:15.845 ]' 00:18:15.845 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.105 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.365 14:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.982 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.242 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.502 00:18:17.502 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.502 14:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.502 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.762 { 00:18:17.762 "cntlid": 31, 00:18:17.762 "qid": 0, 00:18:17.762 "state": "enabled", 00:18:17.762 "listen_address": { 00:18:17.762 "trtype": "TCP", 00:18:17.762 "adrfam": "IPv4", 00:18:17.762 "traddr": "10.0.0.2", 00:18:17.762 "trsvcid": "4420" 00:18:17.762 }, 00:18:17.762 "peer_address": { 00:18:17.762 "trtype": "TCP", 00:18:17.762 "adrfam": "IPv4", 00:18:17.762 "traddr": "10.0.0.1", 00:18:17.762 "trsvcid": "41164" 00:18:17.762 }, 00:18:17.762 "auth": { 00:18:17.762 "state": "completed", 00:18:17.762 "digest": "sha256", 00:18:17.762 "dhgroup": "ffdhe4096" 00:18:17.762 } 00:18:17.762 } 00:18:17.762 ]' 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.762 14:26:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.021 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.021 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.021 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.021 14:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:18:18.960 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.529 00:18:19.529 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.529 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.529 14:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.788 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.788 { 00:18:19.788 "cntlid": 33, 00:18:19.788 "qid": 0, 00:18:19.788 "state": "enabled", 00:18:19.788 "listen_address": { 00:18:19.788 "trtype": "TCP", 00:18:19.788 "adrfam": "IPv4", 00:18:19.788 "traddr": "10.0.0.2", 00:18:19.788 "trsvcid": "4420" 00:18:19.788 }, 00:18:19.788 "peer_address": { 00:18:19.788 "trtype": "TCP", 00:18:19.788 "adrfam": "IPv4", 00:18:19.788 "traddr": "10.0.0.1", 00:18:19.788 "trsvcid": "41194" 00:18:19.788 }, 00:18:19.788 "auth": { 00:18:19.788 "state": "completed", 00:18:19.788 "digest": "sha256", 00:18:19.788 "dhgroup": "ffdhe6144" 00:18:19.788 } 00:18:19.789 } 00:18:19.789 ]' 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.789 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.048 14:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:20.618 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:20.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.618 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.878 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.447 00:18:21.447 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.447 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.447 14:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.707 { 00:18:21.707 "cntlid": 35, 00:18:21.707 "qid": 0, 00:18:21.707 "state": "enabled", 00:18:21.707 "listen_address": { 00:18:21.707 "trtype": "TCP", 00:18:21.707 "adrfam": "IPv4", 00:18:21.707 "traddr": "10.0.0.2", 00:18:21.707 "trsvcid": "4420" 00:18:21.707 }, 00:18:21.707 "peer_address": { 00:18:21.707 "trtype": "TCP", 00:18:21.707 "adrfam": "IPv4", 00:18:21.707 "traddr": "10.0.0.1", 00:18:21.707 "trsvcid": "41206" 00:18:21.707 }, 00:18:21.707 "auth": { 00:18:21.707 "state": "completed", 00:18:21.707 "digest": "sha256", 00:18:21.707 "dhgroup": "ffdhe6144" 00:18:21.707 } 00:18:21.707 } 00:18:21.707 ]' 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.707 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.967 14:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:22.537 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.796 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.797 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.366 00:18:23.366 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.366 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.366 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.625 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.625 14:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.625 { 00:18:23.625 "cntlid": 37, 00:18:23.625 "qid": 0, 00:18:23.625 "state": "enabled", 00:18:23.625 "listen_address": { 00:18:23.625 "trtype": "TCP", 00:18:23.625 "adrfam": "IPv4", 00:18:23.625 "traddr": "10.0.0.2", 00:18:23.625 "trsvcid": "4420" 00:18:23.625 }, 00:18:23.625 "peer_address": { 00:18:23.625 "trtype": "TCP", 00:18:23.625 "adrfam": "IPv4", 00:18:23.625 "traddr": "10.0.0.1", 00:18:23.625 "trsvcid": "47260" 00:18:23.625 }, 00:18:23.625 "auth": { 00:18:23.625 "state": "completed", 00:18:23.625 "digest": "sha256", 00:18:23.625 "dhgroup": "ffdhe6144" 00:18:23.625 } 00:18:23.625 } 00:18:23.625 ]' 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.625 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.885 14:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.823 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.392 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.392 { 00:18:25.392 "cntlid": 39, 00:18:25.392 "qid": 0, 00:18:25.392 "state": "enabled", 00:18:25.392 "listen_address": { 00:18:25.392 "trtype": "TCP", 00:18:25.392 "adrfam": "IPv4", 00:18:25.392 "traddr": "10.0.0.2", 00:18:25.392 "trsvcid": "4420" 00:18:25.392 }, 00:18:25.392 "peer_address": { 00:18:25.392 "trtype": "TCP", 00:18:25.392 "adrfam": "IPv4", 00:18:25.392 "traddr": "10.0.0.1", 00:18:25.392 "trsvcid": "47286" 00:18:25.392 }, 00:18:25.392 "auth": { 00:18:25.392 "state": "completed", 00:18:25.392 "digest": "sha256", 00:18:25.392 "dhgroup": "ffdhe6144" 00:18:25.392 } 00:18:25.392 } 00:18:25.392 ]' 00:18:25.392 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.651 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.651 14:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.651 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.651 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.651 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.651 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.651 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.911 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.480 14:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.740 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.309 00:18:27.309 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.309 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.309 14:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.568 { 00:18:27.568 "cntlid": 41, 00:18:27.568 "qid": 0, 00:18:27.568 "state": "enabled", 00:18:27.568 "listen_address": { 00:18:27.568 "trtype": "TCP", 00:18:27.568 "adrfam": "IPv4", 00:18:27.568 "traddr": "10.0.0.2", 00:18:27.568 "trsvcid": "4420" 00:18:27.568 }, 00:18:27.568 "peer_address": { 00:18:27.568 "trtype": "TCP", 00:18:27.568 "adrfam": "IPv4", 00:18:27.568 "traddr": "10.0.0.1", 00:18:27.568 "trsvcid": "47318" 00:18:27.568 }, 00:18:27.568 "auth": { 00:18:27.568 "state": "completed", 00:18:27.568 "digest": "sha256", 00:18:27.568 "dhgroup": "ffdhe8192" 00:18:27.568 } 00:18:27.568 } 00:18:27.568 ]' 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.568 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.828 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.828 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.828 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.829 14:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.767 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.768 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.336 00:18:29.597 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.597 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.597 14:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.597 { 00:18:29.597 "cntlid": 43, 00:18:29.597 "qid": 0, 00:18:29.597 "state": "enabled", 00:18:29.597 "listen_address": { 00:18:29.597 "trtype": "TCP", 00:18:29.597 "adrfam": "IPv4", 00:18:29.597 "traddr": "10.0.0.2", 00:18:29.597 "trsvcid": "4420" 00:18:29.597 }, 00:18:29.597 "peer_address": { 
00:18:29.597 "trtype": "TCP", 00:18:29.597 "adrfam": "IPv4", 00:18:29.597 "traddr": "10.0.0.1", 00:18:29.597 "trsvcid": "47352" 00:18:29.597 }, 00:18:29.597 "auth": { 00:18:29.597 "state": "completed", 00:18:29.597 "digest": "sha256", 00:18:29.597 "dhgroup": "ffdhe8192" 00:18:29.597 } 00:18:29.597 } 00:18:29.597 ]' 00:18:29.597 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.857 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.120 14:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.689 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.948 14:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.570 00:18:31.570 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.570 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.570 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.830 { 00:18:31.830 "cntlid": 45, 00:18:31.830 "qid": 0, 00:18:31.830 "state": "enabled", 00:18:31.830 "listen_address": { 00:18:31.830 "trtype": "TCP", 00:18:31.830 "adrfam": "IPv4", 00:18:31.830 "traddr": "10.0.0.2", 00:18:31.830 "trsvcid": "4420" 00:18:31.830 }, 00:18:31.830 "peer_address": { 00:18:31.830 "trtype": "TCP", 00:18:31.830 "adrfam": "IPv4", 00:18:31.830 "traddr": "10.0.0.1", 00:18:31.830 "trsvcid": "47376" 00:18:31.830 }, 00:18:31.830 "auth": { 00:18:31.830 "state": "completed", 00:18:31.830 "digest": "sha256", 00:18:31.830 "dhgroup": "ffdhe8192" 00:18:31.830 } 00:18:31.830 } 00:18:31.830 ]' 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.830 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.830 14:27:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.090 14:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:32.659 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.660 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.660 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.660 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.660 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.920 14:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
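The attach above is followed by the same three checks every pass makes: fetch the subsystem's qpairs and assert that the negotiated digest and DH group match the pass and that auth.state is "completed". Condensed into one helper for readability (check_auth is an illustrative name, not something target/auth.sh defines; rpc_cmd is the target-side RPC wrapper as above):

  check_auth() {
      local digest=$1 dhgroup=$2 qpairs
      qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
      # same jq filters and comparisons as the entries in this log
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]  || return 1
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || return 1
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
  }
  check_auth sha256 ffdhe8192    # this pass; later passes use sha384 with null, ffdhe2048, ...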
00:18:33.489 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.749 { 00:18:33.749 "cntlid": 47, 00:18:33.749 "qid": 0, 00:18:33.749 "state": "enabled", 00:18:33.749 "listen_address": { 00:18:33.749 "trtype": "TCP", 00:18:33.749 "adrfam": "IPv4", 00:18:33.749 "traddr": "10.0.0.2", 00:18:33.749 "trsvcid": "4420" 00:18:33.749 }, 00:18:33.749 "peer_address": { 00:18:33.749 "trtype": "TCP", 00:18:33.749 "adrfam": "IPv4", 00:18:33.749 "traddr": "10.0.0.1", 00:18:33.749 "trsvcid": "59916" 00:18:33.749 }, 00:18:33.749 "auth": { 00:18:33.749 "state": "completed", 00:18:33.749 "digest": "sha256", 00:18:33.749 "dhgroup": "ffdhe8192" 00:18:33.749 } 00:18:33.749 } 00:18:33.749 ]' 00:18:33.749 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.009 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.270 14:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.840 
14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.840 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.100 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.359 00:18:35.359 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.359 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.359 14:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.620 14:27:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.620 { 00:18:35.620 "cntlid": 49, 00:18:35.620 "qid": 0, 00:18:35.620 "state": "enabled", 00:18:35.620 "listen_address": { 00:18:35.620 "trtype": "TCP", 00:18:35.620 "adrfam": "IPv4", 00:18:35.620 "traddr": "10.0.0.2", 00:18:35.620 "trsvcid": "4420" 00:18:35.620 }, 00:18:35.620 "peer_address": { 00:18:35.620 "trtype": "TCP", 00:18:35.620 "adrfam": "IPv4", 00:18:35.620 "traddr": "10.0.0.1", 00:18:35.620 "trsvcid": "59938" 00:18:35.620 }, 00:18:35.620 "auth": { 00:18:35.620 "state": "completed", 00:18:35.620 "digest": "sha384", 00:18:35.620 "dhgroup": "null" 00:18:35.620 } 00:18:35.620 } 00:18:35.620 ]' 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:35.620 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.879 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.879 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.879 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.880 14:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.821 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.081 00:18:37.081 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.081 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.081 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.341 { 00:18:37.341 "cntlid": 51, 00:18:37.341 "qid": 0, 00:18:37.341 "state": "enabled", 00:18:37.341 "listen_address": { 00:18:37.341 "trtype": "TCP", 00:18:37.341 "adrfam": "IPv4", 00:18:37.341 "traddr": "10.0.0.2", 00:18:37.341 "trsvcid": "4420" 00:18:37.341 }, 00:18:37.341 "peer_address": { 00:18:37.341 "trtype": "TCP", 00:18:37.341 "adrfam": "IPv4", 00:18:37.341 "traddr": "10.0.0.1", 00:18:37.341 "trsvcid": "59958" 00:18:37.341 }, 00:18:37.341 "auth": { 00:18:37.341 "state": "completed", 00:18:37.341 "digest": "sha384", 00:18:37.341 "dhgroup": "null" 00:18:37.341 } 00:18:37.341 } 00:18:37.341 ]' 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.341 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.602 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
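Each pass also repeats the handshake from the Linux kernel initiator: after the SPDK host app detaches, nvme-cli connects with the same secrets in their DHHC-1 text form and then disconnects, as the nvme connect/disconnect entries throughout this log show. Note that the key3 passes supply only --dhchap-secret and no --dhchap-ctrl-secret, so they exercise unidirectional (host-only) authentication; the ${ckeys[$3]:+...} expansion in the connect_authenticate entries likewise drops --dhchap-ctrlr-key when no controller key is configured for that index. Shape of the kernel-side step, with the long base64 secrets elided (they are the ones shown in the surrounding entries):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret "DHHC-1:01:..." \
      --dhchap-ctrl-secret "DHHC-1:02:..."    # omitted on the key3 passes
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0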
00:18:37.602 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.602 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.602 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.602 14:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.863 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.435 14:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:38.696 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.956 00:18:38.956 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.956 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.956 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.217 { 00:18:39.217 "cntlid": 53, 00:18:39.217 "qid": 0, 00:18:39.217 "state": "enabled", 00:18:39.217 "listen_address": { 00:18:39.217 "trtype": "TCP", 00:18:39.217 "adrfam": "IPv4", 00:18:39.217 "traddr": "10.0.0.2", 00:18:39.217 "trsvcid": "4420" 00:18:39.217 }, 00:18:39.217 "peer_address": { 00:18:39.217 "trtype": "TCP", 00:18:39.217 "adrfam": "IPv4", 00:18:39.217 "traddr": "10.0.0.1", 00:18:39.217 "trsvcid": "59980" 00:18:39.217 }, 00:18:39.217 "auth": { 00:18:39.217 "state": "completed", 00:18:39.217 "digest": "sha384", 00:18:39.217 "dhgroup": "null" 00:18:39.217 } 00:18:39.217 } 00:18:39.217 ]' 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.217 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.478 14:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.421 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.421 14:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.422 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.422 14:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.681 00:18:40.681 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.681 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.681 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.941 { 00:18:40.941 "cntlid": 55, 00:18:40.941 "qid": 0, 00:18:40.941 "state": "enabled", 00:18:40.941 "listen_address": { 00:18:40.941 "trtype": "TCP", 00:18:40.941 "adrfam": "IPv4", 00:18:40.941 "traddr": "10.0.0.2", 00:18:40.941 "trsvcid": "4420" 00:18:40.941 }, 00:18:40.941 "peer_address": { 00:18:40.941 "trtype": "TCP", 00:18:40.941 "adrfam": "IPv4", 00:18:40.941 "traddr": "10.0.0.1", 00:18:40.941 "trsvcid": "60006" 00:18:40.941 }, 00:18:40.941 "auth": { 00:18:40.941 "state": "completed", 00:18:40.941 "digest": "sha384", 00:18:40.941 "dhgroup": "null" 00:18:40.941 } 00:18:40.941 } 00:18:40.941 ]' 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:40.941 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.202 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.202 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.202 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.202 14:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:42.145 
14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.145 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.406 00:18:42.406 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.406 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.406 14:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.666 { 00:18:42.666 "cntlid": 57, 00:18:42.666 "qid": 0, 00:18:42.666 "state": "enabled", 00:18:42.666 "listen_address": { 00:18:42.666 "trtype": "TCP", 00:18:42.666 "adrfam": "IPv4", 00:18:42.666 "traddr": "10.0.0.2", 00:18:42.666 "trsvcid": "4420" 00:18:42.666 }, 00:18:42.666 "peer_address": { 00:18:42.666 "trtype": "TCP", 00:18:42.666 "adrfam": "IPv4", 00:18:42.666 "traddr": "10.0.0.1", 00:18:42.666 "trsvcid": "60024" 00:18:42.666 }, 00:18:42.666 "auth": { 00:18:42.666 "state": "completed", 00:18:42.666 "digest": "sha384", 00:18:42.666 "dhgroup": "ffdhe2048" 00:18:42.666 } 00:18:42.666 } 00:18:42.666 ]' 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.666 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.666 14:27:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.925 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.925 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.925 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.925 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.925 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.186 14:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.757 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.017 14:27:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.017 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.279 00:18:44.279 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.279 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.279 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.539 { 00:18:44.539 "cntlid": 59, 00:18:44.539 "qid": 0, 00:18:44.539 "state": "enabled", 00:18:44.539 "listen_address": { 00:18:44.539 "trtype": "TCP", 00:18:44.539 "adrfam": "IPv4", 00:18:44.539 "traddr": "10.0.0.2", 00:18:44.539 "trsvcid": "4420" 00:18:44.539 }, 00:18:44.539 "peer_address": { 00:18:44.539 "trtype": "TCP", 00:18:44.539 "adrfam": "IPv4", 00:18:44.539 "traddr": "10.0.0.1", 00:18:44.539 "trsvcid": "39672" 00:18:44.539 }, 00:18:44.539 "auth": { 00:18:44.539 "state": "completed", 00:18:44.539 "digest": "sha384", 00:18:44.539 "dhgroup": "ffdhe2048" 00:18:44.539 } 00:18:44.539 } 00:18:44.539 ]' 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.539 14:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.539 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.539 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.540 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.540 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.540 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.800 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:45.739 14:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.739 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.000 00:18:46.000 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.000 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.000 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.260 { 00:18:46.260 "cntlid": 61, 00:18:46.260 "qid": 0, 00:18:46.260 "state": "enabled", 00:18:46.260 "listen_address": { 00:18:46.260 "trtype": "TCP", 00:18:46.260 "adrfam": "IPv4", 00:18:46.260 "traddr": "10.0.0.2", 00:18:46.260 "trsvcid": "4420" 00:18:46.260 }, 00:18:46.260 "peer_address": { 00:18:46.260 "trtype": "TCP", 00:18:46.260 "adrfam": "IPv4", 00:18:46.260 "traddr": "10.0.0.1", 00:18:46.260 "trsvcid": "39706" 00:18:46.260 }, 00:18:46.260 "auth": { 00:18:46.260 "state": "completed", 00:18:46.260 "digest": "sha384", 00:18:46.260 "dhgroup": "ffdhe2048" 00:18:46.260 } 00:18:46.260 } 00:18:46.260 ]' 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.260 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.520 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.520 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.520 14:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.520 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:18:47.459 14:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.459 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.719 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.980 { 00:18:47.980 "cntlid": 63, 00:18:47.980 "qid": 0, 00:18:47.980 "state": "enabled", 00:18:47.980 "listen_address": { 00:18:47.980 "trtype": "TCP", 00:18:47.980 "adrfam": "IPv4", 00:18:47.980 "traddr": "10.0.0.2", 00:18:47.980 "trsvcid": "4420" 00:18:47.980 }, 00:18:47.980 "peer_address": { 00:18:47.980 "trtype": "TCP", 00:18:47.980 "adrfam": "IPv4", 00:18:47.980 "traddr": "10.0.0.1", 00:18:47.980 "trsvcid": "39728" 00:18:47.980 }, 00:18:47.980 "auth": { 00:18:47.980 "state": "completed", 00:18:47.980 "digest": 
"sha384", 00:18:47.980 "dhgroup": "ffdhe2048" 00:18:47.980 } 00:18:47.980 } 00:18:47.980 ]' 00:18:47.980 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.240 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.499 14:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.067 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.327 14:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.587 00:18:49.587 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.587 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.587 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.848 { 00:18:49.848 "cntlid": 65, 00:18:49.848 "qid": 0, 00:18:49.848 "state": "enabled", 00:18:49.848 "listen_address": { 00:18:49.848 "trtype": "TCP", 00:18:49.848 "adrfam": "IPv4", 00:18:49.848 "traddr": "10.0.0.2", 00:18:49.848 "trsvcid": "4420" 00:18:49.848 }, 00:18:49.848 "peer_address": { 00:18:49.848 "trtype": "TCP", 00:18:49.848 "adrfam": "IPv4", 00:18:49.848 "traddr": "10.0.0.1", 00:18:49.848 "trsvcid": "39742" 00:18:49.848 }, 00:18:49.848 "auth": { 00:18:49.848 "state": "completed", 00:18:49.848 "digest": "sha384", 00:18:49.848 "dhgroup": "ffdhe3072" 00:18:49.848 } 00:18:49.848 } 00:18:49.848 ]' 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.848 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.849 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.108 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.108 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.108 14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.108 
14:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.089 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.349 00:18:51.349 14:27:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.349 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.349 14:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.610 { 00:18:51.610 "cntlid": 67, 00:18:51.610 "qid": 0, 00:18:51.610 "state": "enabled", 00:18:51.610 "listen_address": { 00:18:51.610 "trtype": "TCP", 00:18:51.610 "adrfam": "IPv4", 00:18:51.610 "traddr": "10.0.0.2", 00:18:51.610 "trsvcid": "4420" 00:18:51.610 }, 00:18:51.610 "peer_address": { 00:18:51.610 "trtype": "TCP", 00:18:51.610 "adrfam": "IPv4", 00:18:51.610 "traddr": "10.0.0.1", 00:18:51.610 "trsvcid": "39788" 00:18:51.610 }, 00:18:51.610 "auth": { 00:18:51.610 "state": "completed", 00:18:51.610 "digest": "sha384", 00:18:51.610 "dhgroup": "ffdhe3072" 00:18:51.610 } 00:18:51.610 } 00:18:51.610 ]' 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.610 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.870 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:51.870 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.870 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.870 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.870 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.131 14:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 
14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.701 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.961 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.221 00:18:53.221 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.221 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.221 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.482 { 00:18:53.482 "cntlid": 69, 00:18:53.482 "qid": 0, 00:18:53.482 "state": "enabled", 00:18:53.482 "listen_address": { 
00:18:53.482 "trtype": "TCP", 00:18:53.482 "adrfam": "IPv4", 00:18:53.482 "traddr": "10.0.0.2", 00:18:53.482 "trsvcid": "4420" 00:18:53.482 }, 00:18:53.482 "peer_address": { 00:18:53.482 "trtype": "TCP", 00:18:53.482 "adrfam": "IPv4", 00:18:53.482 "traddr": "10.0.0.1", 00:18:53.482 "trsvcid": "53872" 00:18:53.482 }, 00:18:53.482 "auth": { 00:18:53.482 "state": "completed", 00:18:53.482 "digest": "sha384", 00:18:53.482 "dhgroup": "ffdhe3072" 00:18:53.482 } 00:18:53.482 } 00:18:53.482 ]' 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.482 14:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.482 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.482 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.742 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.742 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.742 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.742 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.724 14:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.724 
14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.724 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.983 00:18:54.983 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.983 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.983 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.243 { 00:18:55.243 "cntlid": 71, 00:18:55.243 "qid": 0, 00:18:55.243 "state": "enabled", 00:18:55.243 "listen_address": { 00:18:55.243 "trtype": "TCP", 00:18:55.243 "adrfam": "IPv4", 00:18:55.243 "traddr": "10.0.0.2", 00:18:55.243 "trsvcid": "4420" 00:18:55.243 }, 00:18:55.243 "peer_address": { 00:18:55.243 "trtype": "TCP", 00:18:55.243 "adrfam": "IPv4", 00:18:55.243 "traddr": "10.0.0.1", 00:18:55.243 "trsvcid": "53898" 00:18:55.243 }, 00:18:55.243 "auth": { 00:18:55.243 "state": "completed", 00:18:55.243 "digest": "sha384", 00:18:55.243 "dhgroup": "ffdhe3072" 00:18:55.243 } 00:18:55.243 } 00:18:55.243 ]' 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.243 14:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.504 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:18:56.444 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.445 14:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.705 00:18:56.705 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.705 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.705 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.966 { 00:18:56.966 "cntlid": 73, 00:18:56.966 "qid": 0, 00:18:56.966 "state": "enabled", 00:18:56.966 "listen_address": { 00:18:56.966 "trtype": "TCP", 00:18:56.966 "adrfam": "IPv4", 00:18:56.966 "traddr": "10.0.0.2", 00:18:56.966 "trsvcid": "4420" 00:18:56.966 }, 00:18:56.966 "peer_address": { 00:18:56.966 "trtype": "TCP", 00:18:56.966 "adrfam": "IPv4", 00:18:56.966 "traddr": "10.0.0.1", 00:18:56.966 "trsvcid": "53914" 00:18:56.966 }, 00:18:56.966 "auth": { 00:18:56.966 "state": "completed", 00:18:56.966 "digest": "sha384", 00:18:56.966 "dhgroup": "ffdhe4096" 00:18:56.966 } 00:18:56.966 } 00:18:56.966 ]' 00:18:56.966 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.227 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.489 14:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:58.060 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.322 14:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.583 00:18:58.583 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.583 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.583 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.843 { 00:18:58.843 "cntlid": 75, 00:18:58.843 "qid": 0, 00:18:58.843 "state": "enabled", 00:18:58.843 "listen_address": { 00:18:58.843 "trtype": "TCP", 00:18:58.843 "adrfam": "IPv4", 00:18:58.843 "traddr": "10.0.0.2", 00:18:58.843 "trsvcid": "4420" 00:18:58.843 }, 00:18:58.843 "peer_address": { 00:18:58.843 "trtype": "TCP", 00:18:58.843 "adrfam": "IPv4", 00:18:58.843 "traddr": "10.0.0.1", 00:18:58.843 "trsvcid": "53954" 00:18:58.843 }, 00:18:58.843 "auth": { 00:18:58.843 "state": "completed", 00:18:58.843 "digest": "sha384", 00:18:58.843 "dhgroup": "ffdhe4096" 00:18:58.843 } 00:18:58.843 } 00:18:58.843 ]' 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.843 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.104 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.104 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.104 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.104 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.104 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.364 14:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.934 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.195 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.456 00:19:00.456 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.456 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.456 14:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.717 { 00:19:00.717 "cntlid": 77, 00:19:00.717 "qid": 0, 00:19:00.717 "state": "enabled", 00:19:00.717 "listen_address": { 00:19:00.717 "trtype": "TCP", 00:19:00.717 "adrfam": "IPv4", 00:19:00.717 "traddr": "10.0.0.2", 00:19:00.717 "trsvcid": "4420" 00:19:00.717 }, 00:19:00.717 "peer_address": { 00:19:00.717 "trtype": "TCP", 00:19:00.717 "adrfam": "IPv4", 00:19:00.717 "traddr": "10.0.0.1", 00:19:00.717 "trsvcid": "53988" 00:19:00.717 }, 00:19:00.717 "auth": { 00:19:00.717 "state": "completed", 00:19:00.717 "digest": "sha384", 00:19:00.717 "dhgroup": "ffdhe4096" 00:19:00.717 } 00:19:00.717 } 00:19:00.717 ]' 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.717 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.977 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.977 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.977 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.977 14:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:01.547 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.547 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.547 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.548 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.548 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.548 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.548 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:01.548 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.807 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.067 00:19:02.067 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.067 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.067 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.328 { 00:19:02.328 "cntlid": 79, 00:19:02.328 "qid": 0, 00:19:02.328 "state": "enabled", 00:19:02.328 "listen_address": { 00:19:02.328 "trtype": "TCP", 00:19:02.328 "adrfam": "IPv4", 00:19:02.328 "traddr": "10.0.0.2", 00:19:02.328 "trsvcid": "4420" 00:19:02.328 }, 00:19:02.328 "peer_address": { 00:19:02.328 "trtype": "TCP", 00:19:02.328 "adrfam": "IPv4", 00:19:02.328 "traddr": "10.0.0.1", 00:19:02.328 "trsvcid": "54010" 00:19:02.328 }, 00:19:02.328 "auth": { 00:19:02.328 "state": "completed", 00:19:02.328 "digest": "sha384", 00:19:02.328 "dhgroup": "ffdhe4096" 00:19:02.328 } 00:19:02.328 } 00:19:02.328 ]' 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.328 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.589 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.589 14:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.589 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.589 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.589 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.850 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:03.421 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.422 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.422 14:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.682 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:03.682 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.682 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.682 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:03.682 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.683 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.944 00:19:03.944 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.944 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.944 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.205 { 00:19:04.205 "cntlid": 81, 00:19:04.205 "qid": 0, 00:19:04.205 "state": "enabled", 00:19:04.205 "listen_address": { 00:19:04.205 "trtype": "TCP", 00:19:04.205 "adrfam": "IPv4", 00:19:04.205 "traddr": "10.0.0.2", 00:19:04.205 "trsvcid": "4420" 00:19:04.205 }, 00:19:04.205 "peer_address": { 00:19:04.205 "trtype": "TCP", 00:19:04.205 "adrfam": "IPv4", 00:19:04.205 "traddr": "10.0.0.1", 00:19:04.205 "trsvcid": "47488" 00:19:04.205 }, 00:19:04.205 "auth": { 00:19:04.205 "state": "completed", 00:19:04.205 "digest": "sha384", 00:19:04.205 "dhgroup": "ffdhe6144" 00:19:04.205 } 00:19:04.205 } 00:19:04.205 ]' 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.205 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.467 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.467 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.467 14:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.467 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:05.045 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.046 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.046 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.046 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.306 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.307 14:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.876 00:19:05.876 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.876 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.876 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.137 { 00:19:06.137 "cntlid": 83, 00:19:06.137 "qid": 0, 00:19:06.137 "state": "enabled", 00:19:06.137 "listen_address": { 00:19:06.137 "trtype": "TCP", 00:19:06.137 "adrfam": "IPv4", 00:19:06.137 "traddr": "10.0.0.2", 00:19:06.137 "trsvcid": "4420" 00:19:06.137 }, 00:19:06.137 "peer_address": { 00:19:06.137 "trtype": "TCP", 00:19:06.137 "adrfam": "IPv4", 00:19:06.137 "traddr": "10.0.0.1", 00:19:06.137 "trsvcid": "47512" 00:19:06.137 }, 00:19:06.137 "auth": { 00:19:06.137 "state": "completed", 00:19:06.137 "digest": "sha384", 00:19:06.137 
"dhgroup": "ffdhe6144" 00:19:06.137 } 00:19:06.137 } 00:19:06.137 ]' 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.137 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.397 14:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:06.967 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 14:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.228 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.228 14:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.798 00:19:07.798 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.798 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.798 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.058 { 00:19:08.058 "cntlid": 85, 00:19:08.058 "qid": 0, 00:19:08.058 "state": "enabled", 00:19:08.058 "listen_address": { 00:19:08.058 "trtype": "TCP", 00:19:08.058 "adrfam": "IPv4", 00:19:08.058 "traddr": "10.0.0.2", 00:19:08.058 "trsvcid": "4420" 00:19:08.058 }, 00:19:08.058 "peer_address": { 00:19:08.058 "trtype": "TCP", 00:19:08.058 "adrfam": "IPv4", 00:19:08.058 "traddr": "10.0.0.1", 00:19:08.058 "trsvcid": "47548" 00:19:08.058 }, 00:19:08.058 "auth": { 00:19:08.058 "state": "completed", 00:19:08.058 "digest": "sha384", 00:19:08.058 "dhgroup": "ffdhe6144" 00:19:08.058 } 00:19:08.058 } 00:19:08.058 ]' 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.058 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.319 14:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.888 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.148 14:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.717 00:19:09.717 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.717 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.717 14:27:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.717 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.028 { 00:19:10.028 "cntlid": 87, 00:19:10.028 "qid": 0, 00:19:10.028 "state": "enabled", 00:19:10.028 "listen_address": { 00:19:10.028 "trtype": "TCP", 00:19:10.028 "adrfam": "IPv4", 00:19:10.028 "traddr": "10.0.0.2", 00:19:10.028 "trsvcid": "4420" 00:19:10.028 }, 00:19:10.028 "peer_address": { 00:19:10.028 "trtype": "TCP", 00:19:10.028 "adrfam": "IPv4", 00:19:10.028 "traddr": "10.0.0.1", 00:19:10.028 "trsvcid": "47564" 00:19:10.028 }, 00:19:10.028 "auth": { 00:19:10.028 "state": "completed", 00:19:10.028 "digest": "sha384", 00:19:10.028 "dhgroup": "ffdhe6144" 00:19:10.028 } 00:19:10.028 } 00:19:10.028 ]' 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.028 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.290 14:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.860 14:27:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.860 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.120 14:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.690 00:19:11.690 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.690 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.690 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.950 { 00:19:11.950 "cntlid": 89, 00:19:11.950 "qid": 0, 00:19:11.950 "state": "enabled", 00:19:11.950 "listen_address": { 00:19:11.950 "trtype": "TCP", 00:19:11.950 "adrfam": "IPv4", 00:19:11.950 "traddr": "10.0.0.2", 00:19:11.950 
"trsvcid": "4420" 00:19:11.950 }, 00:19:11.950 "peer_address": { 00:19:11.950 "trtype": "TCP", 00:19:11.950 "adrfam": "IPv4", 00:19:11.950 "traddr": "10.0.0.1", 00:19:11.950 "trsvcid": "47594" 00:19:11.950 }, 00:19:11.950 "auth": { 00:19:11.950 "state": "completed", 00:19:11.950 "digest": "sha384", 00:19:11.950 "dhgroup": "ffdhe8192" 00:19:11.950 } 00:19:11.950 } 00:19:11.950 ]' 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.950 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.210 14:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.150 14:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.720 00:19:13.720 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.720 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.720 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.980 { 00:19:13.980 "cntlid": 91, 00:19:13.980 "qid": 0, 00:19:13.980 "state": "enabled", 00:19:13.980 "listen_address": { 00:19:13.980 "trtype": "TCP", 00:19:13.980 "adrfam": "IPv4", 00:19:13.980 "traddr": "10.0.0.2", 00:19:13.980 "trsvcid": "4420" 00:19:13.980 }, 00:19:13.980 "peer_address": { 00:19:13.980 "trtype": "TCP", 00:19:13.980 "adrfam": "IPv4", 00:19:13.980 "traddr": "10.0.0.1", 00:19:13.980 "trsvcid": "59382" 00:19:13.980 }, 00:19:13.980 "auth": { 00:19:13.980 "state": "completed", 00:19:13.980 "digest": "sha384", 00:19:13.980 "dhgroup": "ffdhe8192" 00:19:13.980 } 00:19:13.980 } 00:19:13.980 ]' 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.980 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.241 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.241 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.241 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.241 14:27:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.241 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.502 14:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.071 14:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.011 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.011 { 00:19:16.011 "cntlid": 93, 00:19:16.011 "qid": 0, 00:19:16.011 "state": "enabled", 00:19:16.011 "listen_address": { 00:19:16.011 "trtype": "TCP", 00:19:16.011 "adrfam": "IPv4", 00:19:16.011 "traddr": "10.0.0.2", 00:19:16.011 "trsvcid": "4420" 00:19:16.011 }, 00:19:16.011 "peer_address": { 00:19:16.011 "trtype": "TCP", 00:19:16.011 "adrfam": "IPv4", 00:19:16.011 "traddr": "10.0.0.1", 00:19:16.011 "trsvcid": "59416" 00:19:16.011 }, 00:19:16.011 "auth": { 00:19:16.011 "state": "completed", 00:19:16.011 "digest": "sha384", 00:19:16.011 "dhgroup": "ffdhe8192" 00:19:16.011 } 00:19:16.011 } 00:19:16.011 ]' 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.011 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.271 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.271 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.271 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.271 14:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.841 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.101 14:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.672 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.933 14:27:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.933 { 00:19:17.933 "cntlid": 95, 00:19:17.933 "qid": 0, 00:19:17.933 "state": "enabled", 00:19:17.933 "listen_address": { 00:19:17.933 "trtype": "TCP", 00:19:17.933 "adrfam": "IPv4", 00:19:17.933 "traddr": "10.0.0.2", 00:19:17.933 "trsvcid": "4420" 00:19:17.933 }, 00:19:17.933 "peer_address": { 00:19:17.933 "trtype": "TCP", 00:19:17.933 "adrfam": "IPv4", 00:19:17.933 "traddr": "10.0.0.1", 00:19:17.933 "trsvcid": "59448" 00:19:17.933 }, 00:19:17.933 "auth": { 00:19:17.933 "state": "completed", 00:19:17.933 "digest": "sha384", 00:19:17.933 "dhgroup": "ffdhe8192" 00:19:17.933 } 00:19:17.933 } 00:19:17.933 ]' 00:19:17.933 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.194 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.454 14:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.025 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.285 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.547 00:19:19.547 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.547 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.547 14:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.807 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.807 { 00:19:19.807 "cntlid": 97, 00:19:19.807 "qid": 0, 00:19:19.807 "state": "enabled", 00:19:19.807 "listen_address": { 00:19:19.807 "trtype": "TCP", 00:19:19.807 "adrfam": "IPv4", 00:19:19.807 "traddr": "10.0.0.2", 00:19:19.807 "trsvcid": "4420" 00:19:19.807 }, 00:19:19.807 "peer_address": { 00:19:19.807 "trtype": "TCP", 00:19:19.807 "adrfam": "IPv4", 00:19:19.807 "traddr": "10.0.0.1", 00:19:19.807 "trsvcid": "59486" 00:19:19.807 }, 00:19:19.807 "auth": { 00:19:19.807 "state": "completed", 00:19:19.808 "digest": "sha512", 00:19:19.808 "dhgroup": "null" 00:19:19.808 } 00:19:19.808 } 00:19:19.808 ]' 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.808 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.068 14:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:20.640 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.902 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.163 00:19:21.163 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.163 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.163 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.424 { 00:19:21.424 "cntlid": 99, 00:19:21.424 "qid": 0, 00:19:21.424 "state": "enabled", 00:19:21.424 "listen_address": { 00:19:21.424 "trtype": "TCP", 00:19:21.424 "adrfam": "IPv4", 00:19:21.424 "traddr": "10.0.0.2", 00:19:21.424 "trsvcid": "4420" 00:19:21.424 }, 00:19:21.424 "peer_address": { 00:19:21.424 "trtype": "TCP", 00:19:21.424 "adrfam": "IPv4", 00:19:21.424 "traddr": "10.0.0.1", 00:19:21.424 "trsvcid": "59530" 00:19:21.424 }, 00:19:21.424 "auth": { 00:19:21.424 "state": "completed", 00:19:21.424 "digest": "sha512", 00:19:21.424 "dhgroup": "null" 00:19:21.424 } 00:19:21.424 } 00:19:21.424 ]' 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.424 14:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.685 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 
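The exchange above is one pass of connect_authenticate: the host-side bdev driver is restricted to a single digest/dhgroup, the host NQN is added to the subsystem with the key pair under test, a controller is attached and the qpair's negotiated auth parameters are checked, and the same handshake is then repeated through the kernel initiator before the host entry is removed. A condensed sketch of that pass, reconstructed from the xtrace output (rpc.py path, NQNs, addresses and key names are the ones used in this run; rpc_cmd is the test framework's target-side RPC helper seen in the trace, and the DHHC-1 secrets are abbreviated here):

HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
digest=sha512 dhgroup=null keyid=1

# Limit the host-side driver to the digest/dhgroup combination under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the target subsystem with the key pair under test.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach through the SPDK host stack, verify the negotiated parameters
# reported for the new qpair, then detach again.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
$HOSTRPC bdev_nvme_detach_controller nvme0

# Repeat the handshake through the kernel initiator, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"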
00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.257 14:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.518 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.779 00:19:22.779 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.779 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.779 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.040 { 00:19:23.040 "cntlid": 101, 00:19:23.040 "qid": 0, 00:19:23.040 "state": "enabled", 00:19:23.040 "listen_address": { 00:19:23.040 "trtype": "TCP", 00:19:23.040 "adrfam": "IPv4", 00:19:23.040 "traddr": "10.0.0.2", 00:19:23.040 "trsvcid": "4420" 00:19:23.040 }, 00:19:23.040 "peer_address": { 00:19:23.040 "trtype": "TCP", 00:19:23.040 "adrfam": "IPv4", 00:19:23.040 "traddr": "10.0.0.1", 00:19:23.040 "trsvcid": "59556" 00:19:23.040 }, 00:19:23.040 "auth": { 00:19:23.040 "state": "completed", 00:19:23.040 "digest": "sha512", 00:19:23.040 "dhgroup": "null" 00:19:23.040 } 00:19:23.040 } 00:19:23.040 ]' 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.040 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.302 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.302 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.302 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.302 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.302 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.563 14:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.135 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.396 14:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.660 00:19:24.660 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.660 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.660 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.920 { 00:19:24.920 "cntlid": 103, 00:19:24.920 "qid": 0, 00:19:24.920 "state": "enabled", 00:19:24.920 "listen_address": { 00:19:24.920 "trtype": "TCP", 00:19:24.920 "adrfam": "IPv4", 00:19:24.920 "traddr": "10.0.0.2", 00:19:24.920 "trsvcid": "4420" 00:19:24.920 }, 00:19:24.920 "peer_address": { 00:19:24.920 "trtype": "TCP", 00:19:24.920 "adrfam": "IPv4", 00:19:24.920 "traddr": "10.0.0.1", 00:19:24.920 "trsvcid": "33014" 00:19:24.920 }, 00:19:24.920 "auth": { 00:19:24.920 "state": "completed", 00:19:24.920 "digest": "sha512", 00:19:24.920 "dhgroup": "null" 00:19:24.920 } 00:19:24.920 } 00:19:24.920 ]' 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.920 14:28:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.920 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.181 14:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:26.124 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.125 14:28:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.125 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.385 00:19:26.385 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.385 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.385 14:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.647 { 00:19:26.647 "cntlid": 105, 00:19:26.647 "qid": 0, 00:19:26.647 "state": "enabled", 00:19:26.647 "listen_address": { 00:19:26.647 "trtype": "TCP", 00:19:26.647 "adrfam": "IPv4", 00:19:26.647 "traddr": "10.0.0.2", 00:19:26.647 "trsvcid": "4420" 00:19:26.647 }, 00:19:26.647 "peer_address": { 00:19:26.647 "trtype": "TCP", 00:19:26.647 "adrfam": "IPv4", 00:19:26.647 "traddr": "10.0.0.1", 00:19:26.647 "trsvcid": "33034" 00:19:26.647 }, 00:19:26.647 "auth": { 00:19:26.647 "state": "completed", 00:19:26.647 "digest": "sha512", 00:19:26.647 "dhgroup": "ffdhe2048" 00:19:26.647 } 00:19:26.647 } 00:19:26.647 ]' 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.647 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.907 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.907 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.907 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.907 14:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:27.478 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.739 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.999 00:19:27.999 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.999 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.999 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.260 { 00:19:28.260 "cntlid": 107, 00:19:28.260 "qid": 0, 00:19:28.260 "state": "enabled", 00:19:28.260 "listen_address": { 00:19:28.260 "trtype": "TCP", 00:19:28.260 "adrfam": "IPv4", 00:19:28.260 "traddr": "10.0.0.2", 00:19:28.260 "trsvcid": "4420" 00:19:28.260 }, 00:19:28.260 "peer_address": { 00:19:28.260 "trtype": "TCP", 00:19:28.260 "adrfam": "IPv4", 00:19:28.260 "traddr": "10.0.0.1", 00:19:28.260 "trsvcid": "33062" 00:19:28.260 }, 00:19:28.260 "auth": { 00:19:28.260 "state": "completed", 00:19:28.260 "digest": "sha512", 00:19:28.260 "dhgroup": "ffdhe2048" 00:19:28.260 } 00:19:28.260 } 00:19:28.260 ]' 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.260 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.520 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.520 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.520 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.520 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.520 14:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.780 14:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.356 14:28:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.356 14:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.663 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.970 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.970 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.233 { 00:19:30.233 "cntlid": 109, 00:19:30.233 "qid": 0, 00:19:30.233 "state": "enabled", 00:19:30.233 "listen_address": { 00:19:30.233 "trtype": "TCP", 00:19:30.233 "adrfam": "IPv4", 00:19:30.233 "traddr": "10.0.0.2", 00:19:30.233 "trsvcid": "4420" 00:19:30.233 }, 00:19:30.233 "peer_address": { 00:19:30.233 "trtype": "TCP", 00:19:30.233 
"adrfam": "IPv4", 00:19:30.233 "traddr": "10.0.0.1", 00:19:30.233 "trsvcid": "33098" 00:19:30.233 }, 00:19:30.233 "auth": { 00:19:30.233 "state": "completed", 00:19:30.233 "digest": "sha512", 00:19:30.233 "dhgroup": "ffdhe2048" 00:19:30.233 } 00:19:30.233 } 00:19:30.233 ]' 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.233 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.494 14:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.066 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.327 14:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.588 00:19:31.588 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.588 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.588 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.849 { 00:19:31.849 "cntlid": 111, 00:19:31.849 "qid": 0, 00:19:31.849 "state": "enabled", 00:19:31.849 "listen_address": { 00:19:31.849 "trtype": "TCP", 00:19:31.849 "adrfam": "IPv4", 00:19:31.849 "traddr": "10.0.0.2", 00:19:31.849 "trsvcid": "4420" 00:19:31.849 }, 00:19:31.849 "peer_address": { 00:19:31.849 "trtype": "TCP", 00:19:31.849 "adrfam": "IPv4", 00:19:31.849 "traddr": "10.0.0.1", 00:19:31.849 "trsvcid": "33126" 00:19:31.849 }, 00:19:31.849 "auth": { 00:19:31.849 "state": "completed", 00:19:31.849 "digest": "sha512", 00:19:31.849 "dhgroup": "ffdhe2048" 00:19:31.849 } 00:19:31.849 } 00:19:31.849 ]' 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.849 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.109 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.109 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.109 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.109 14:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.051 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
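The xtrace markers in the prefixes above (target/auth.sh@91, @92, @93 and @94) show the driver loop around these passes: every digest is combined with every dhgroup and every key index, and bdev_nvme_set_options is re-applied before each combination. A minimal sketch of that loop follows; the array contents listed are only the values that actually appear in this excerpt, and the full script may cover more:

digests=(sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)
keys=(key0 key1 key2 key3)   # ckey0..ckey2 are defined; key3 has no controller key

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host-side driver, then run one authenticate pass
            # (the connect_authenticate function traced above).
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done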
00:19:33.312 00:19:33.312 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.312 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.312 14:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.573 { 00:19:33.573 "cntlid": 113, 00:19:33.573 "qid": 0, 00:19:33.573 "state": "enabled", 00:19:33.573 "listen_address": { 00:19:33.573 "trtype": "TCP", 00:19:33.573 "adrfam": "IPv4", 00:19:33.573 "traddr": "10.0.0.2", 00:19:33.573 "trsvcid": "4420" 00:19:33.573 }, 00:19:33.573 "peer_address": { 00:19:33.573 "trtype": "TCP", 00:19:33.573 "adrfam": "IPv4", 00:19:33.573 "traddr": "10.0.0.1", 00:19:33.573 "trsvcid": "48154" 00:19:33.573 }, 00:19:33.573 "auth": { 00:19:33.573 "state": "completed", 00:19:33.573 "digest": "sha512", 00:19:33.573 "dhgroup": "ffdhe3072" 00:19:33.573 } 00:19:33.573 } 00:19:33.573 ]' 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.573 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.833 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.833 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.833 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.833 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
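One detail visible in the add_host and attach_controller calls above: key0 through key2 are always paired with a matching ckeyN via --dhchap-ctrlr-key, while key3 is passed alone, so that combination exercises unidirectional authentication (the target verifies the host, but the host does not verify the target). The trace line at target/auth.sh@37 shows how the optional flag is assembled; a sketch of that expansion, written with keyid in place of the positional $3 seen in the trace:

# Expands to the extra flag only when a controller key is defined for this
# key index; for key3 the array is empty and the flag is omitted entirely.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"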
00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.404 14:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.663 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.923 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.182 { 00:19:35.182 
"cntlid": 115, 00:19:35.182 "qid": 0, 00:19:35.182 "state": "enabled", 00:19:35.182 "listen_address": { 00:19:35.182 "trtype": "TCP", 00:19:35.182 "adrfam": "IPv4", 00:19:35.182 "traddr": "10.0.0.2", 00:19:35.182 "trsvcid": "4420" 00:19:35.182 }, 00:19:35.182 "peer_address": { 00:19:35.182 "trtype": "TCP", 00:19:35.182 "adrfam": "IPv4", 00:19:35.182 "traddr": "10.0.0.1", 00:19:35.182 "trsvcid": "48182" 00:19:35.182 }, 00:19:35.182 "auth": { 00:19:35.182 "state": "completed", 00:19:35.182 "digest": "sha512", 00:19:35.182 "dhgroup": "ffdhe3072" 00:19:35.182 } 00:19:35.182 } 00:19:35.182 ]' 00:19:35.182 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.442 14:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.702 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.272 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.533 14:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.793 00:19:36.793 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.793 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.793 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.053 { 00:19:37.053 "cntlid": 117, 00:19:37.053 "qid": 0, 00:19:37.053 "state": "enabled", 00:19:37.053 "listen_address": { 00:19:37.053 "trtype": "TCP", 00:19:37.053 "adrfam": "IPv4", 00:19:37.053 "traddr": "10.0.0.2", 00:19:37.053 "trsvcid": "4420" 00:19:37.053 }, 00:19:37.053 "peer_address": { 00:19:37.053 "trtype": "TCP", 00:19:37.053 "adrfam": "IPv4", 00:19:37.053 "traddr": "10.0.0.1", 00:19:37.053 "trsvcid": "48210" 00:19:37.053 }, 00:19:37.053 "auth": { 00:19:37.053 "state": "completed", 00:19:37.053 "digest": "sha512", 00:19:37.053 "dhgroup": "ffdhe3072" 00:19:37.053 } 00:19:37.053 } 00:19:37.053 ]' 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.053 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.313 14:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.884 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.145 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.406 00:19:38.406 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.406 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.406 14:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.666 { 00:19:38.666 "cntlid": 119, 00:19:38.666 "qid": 0, 00:19:38.666 "state": "enabled", 00:19:38.666 "listen_address": { 00:19:38.666 "trtype": "TCP", 00:19:38.666 "adrfam": "IPv4", 00:19:38.666 "traddr": "10.0.0.2", 00:19:38.666 "trsvcid": "4420" 00:19:38.666 }, 00:19:38.666 "peer_address": { 00:19:38.666 "trtype": "TCP", 00:19:38.666 "adrfam": "IPv4", 00:19:38.666 "traddr": "10.0.0.1", 00:19:38.666 "trsvcid": "48226" 00:19:38.666 }, 00:19:38.666 "auth": { 00:19:38.666 "state": "completed", 00:19:38.666 "digest": "sha512", 00:19:38.666 "dhgroup": "ffdhe3072" 00:19:38.666 } 00:19:38.666 } 00:19:38.666 ]' 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.666 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.927 14:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:39.498 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.499 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.759 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.020 00:19:40.020 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.020 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.020 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.281 14:28:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.281 { 00:19:40.281 "cntlid": 121, 00:19:40.281 "qid": 0, 00:19:40.281 "state": "enabled", 00:19:40.281 "listen_address": { 00:19:40.281 "trtype": "TCP", 00:19:40.281 "adrfam": "IPv4", 00:19:40.281 "traddr": "10.0.0.2", 00:19:40.281 "trsvcid": "4420" 00:19:40.281 }, 00:19:40.281 "peer_address": { 00:19:40.281 "trtype": "TCP", 00:19:40.281 "adrfam": "IPv4", 00:19:40.281 "traddr": "10.0.0.1", 00:19:40.281 "trsvcid": "48256" 00:19:40.281 }, 00:19:40.281 "auth": { 00:19:40.281 "state": "completed", 00:19:40.281 "digest": "sha512", 00:19:40.281 "dhgroup": "ffdhe4096" 00:19:40.281 } 00:19:40.281 } 00:19:40.281 ]' 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.281 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.543 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.543 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.543 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.543 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.543 14:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.804 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.374 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.635 14:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.896 00:19:41.896 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.896 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.896 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.157 { 00:19:42.157 "cntlid": 123, 00:19:42.157 "qid": 0, 00:19:42.157 "state": "enabled", 00:19:42.157 "listen_address": { 00:19:42.157 "trtype": "TCP", 00:19:42.157 "adrfam": "IPv4", 00:19:42.157 "traddr": "10.0.0.2", 00:19:42.157 "trsvcid": "4420" 00:19:42.157 }, 00:19:42.157 "peer_address": { 00:19:42.157 "trtype": "TCP", 00:19:42.157 "adrfam": "IPv4", 00:19:42.157 "traddr": "10.0.0.1", 00:19:42.157 "trsvcid": "48286" 00:19:42.157 }, 00:19:42.157 "auth": { 00:19:42.157 "state": "completed", 00:19:42.157 "digest": "sha512", 00:19:42.157 "dhgroup": "ffdhe4096" 00:19:42.157 } 00:19:42.157 } 00:19:42.157 ]' 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.157 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.418 14:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:42.990 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.990 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.990 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.990 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.250 
14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.250 14:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.822 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.822 { 00:19:43.822 "cntlid": 125, 00:19:43.822 "qid": 0, 00:19:43.822 "state": "enabled", 00:19:43.822 "listen_address": { 00:19:43.822 "trtype": "TCP", 00:19:43.822 "adrfam": "IPv4", 00:19:43.822 "traddr": "10.0.0.2", 00:19:43.822 "trsvcid": "4420" 00:19:43.822 }, 00:19:43.822 "peer_address": { 00:19:43.822 "trtype": "TCP", 00:19:43.822 "adrfam": "IPv4", 00:19:43.822 "traddr": "10.0.0.1", 00:19:43.822 "trsvcid": "47092" 00:19:43.822 }, 00:19:43.822 "auth": { 00:19:43.822 "state": "completed", 00:19:43.822 "digest": "sha512", 00:19:43.822 "dhgroup": "ffdhe4096" 00:19:43.822 } 00:19:43.822 } 00:19:43.822 ]' 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.822 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.084 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.084 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.084 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.084 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.084 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.344 14:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.914 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.174 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.433 00:19:45.433 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.433 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.433 14:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.693 { 00:19:45.693 "cntlid": 127, 00:19:45.693 "qid": 0, 00:19:45.693 "state": "enabled", 00:19:45.693 "listen_address": { 00:19:45.693 "trtype": "TCP", 00:19:45.693 "adrfam": "IPv4", 00:19:45.693 "traddr": "10.0.0.2", 00:19:45.693 "trsvcid": "4420" 00:19:45.693 }, 00:19:45.693 "peer_address": { 00:19:45.693 "trtype": "TCP", 00:19:45.693 "adrfam": "IPv4", 00:19:45.693 "traddr": "10.0.0.1", 00:19:45.693 "trsvcid": "47118" 00:19:45.693 }, 00:19:45.693 "auth": { 00:19:45.693 "state": "completed", 00:19:45.693 "digest": "sha512", 00:19:45.693 "dhgroup": "ffdhe4096" 00:19:45.693 } 00:19:45.693 } 00:19:45.693 ]' 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.693 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.953 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:46.522 14:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
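The ffdhe6144 pass that starts at this point repeats the same connect_authenticate cycle traced above for ffdhe3072 and ffdhe4096. A minimal sketch of one iteration follows, using only commands that appear verbatim in this trace; it assumes the SPDK target and host application are already running, that rpc_cmd is the test suite's target-side rpc.py wrapper, and that the keyring entries key0/ckey0 (and their in-band DHHC-1 secrets) were registered by earlier steps of target/auth.sh that are not part of this excerpt.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target side: allow this host on the subsystem with key0/ckey0 (keyring
    # names assumed to have been added earlier in the test).
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authenticate through the SPDK host application and check what the target
    # negotiated on the resulting qpair.
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                    # expect: nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect: sha512
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe6144
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect: completed
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, passing the in-band
    # secrets directly ($KEY0_SECRET / $CKEY0_SECRET are placeholders for the
    # DHHC-1:00:... and DHHC-1:03:... strings shown in the trace).
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
    nvme disconnect -n "$SUBNQN"

    # Drop the host entry before the next key/dhgroup combination.
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Exercising both paths in each iteration is what produces the paired bdev_nvme_attach_controller and nvme connect records in the log: the qpair's auth block confirms the target-side view of the handshake, while the nvme-cli connect/disconnect confirms that the same secrets also authenticate in-band from the kernel initiator.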
00:19:46.522 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.783 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.043 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.303 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.303 { 00:19:47.304 "cntlid": 129, 00:19:47.304 "qid": 0, 00:19:47.304 "state": "enabled", 00:19:47.304 "listen_address": { 00:19:47.304 "trtype": "TCP", 00:19:47.304 "adrfam": "IPv4", 00:19:47.304 "traddr": "10.0.0.2", 00:19:47.304 "trsvcid": "4420" 00:19:47.304 }, 00:19:47.304 "peer_address": { 00:19:47.304 "trtype": "TCP", 00:19:47.304 "adrfam": "IPv4", 00:19:47.304 "traddr": "10.0.0.1", 00:19:47.304 "trsvcid": "47160" 00:19:47.304 }, 00:19:47.304 "auth": { 
00:19:47.304 "state": "completed", 00:19:47.304 "digest": "sha512", 00:19:47.304 "dhgroup": "ffdhe6144" 00:19:47.304 } 00:19:47.304 } 00:19:47.304 ]' 00:19:47.304 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.564 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.564 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.564 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.564 14:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.564 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.564 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.564 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.823 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.393 14:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.681 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.253 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.254 { 00:19:49.254 "cntlid": 131, 00:19:49.254 "qid": 0, 00:19:49.254 "state": "enabled", 00:19:49.254 "listen_address": { 00:19:49.254 "trtype": "TCP", 00:19:49.254 "adrfam": "IPv4", 00:19:49.254 "traddr": "10.0.0.2", 00:19:49.254 "trsvcid": "4420" 00:19:49.254 }, 00:19:49.254 "peer_address": { 00:19:49.254 "trtype": "TCP", 00:19:49.254 "adrfam": "IPv4", 00:19:49.254 "traddr": "10.0.0.1", 00:19:49.254 "trsvcid": "47182" 00:19:49.254 }, 00:19:49.254 "auth": { 00:19:49.254 "state": "completed", 00:19:49.254 "digest": "sha512", 00:19:49.254 "dhgroup": "ffdhe6144" 00:19:49.254 } 00:19:49.254 } 00:19:49.254 ]' 00:19:49.254 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.514 14:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.773 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:50.342 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.343 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.603 14:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:50.863 00:19:50.863 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.863 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.863 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.123 { 00:19:51.123 "cntlid": 133, 00:19:51.123 "qid": 0, 00:19:51.123 "state": "enabled", 00:19:51.123 "listen_address": { 00:19:51.123 "trtype": "TCP", 00:19:51.123 "adrfam": "IPv4", 00:19:51.123 "traddr": "10.0.0.2", 00:19:51.123 "trsvcid": "4420" 00:19:51.123 }, 00:19:51.123 "peer_address": { 00:19:51.123 "trtype": "TCP", 00:19:51.123 "adrfam": "IPv4", 00:19:51.123 "traddr": "10.0.0.1", 00:19:51.123 "trsvcid": "47210" 00:19:51.123 }, 00:19:51.123 "auth": { 00:19:51.123 "state": "completed", 00:19:51.123 "digest": "sha512", 00:19:51.123 "dhgroup": "ffdhe6144" 00:19:51.123 } 00:19:51.123 } 00:19:51.123 ]' 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.123 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.383 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.383 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.383 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.383 14:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.326 14:28:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.326 14:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.897 00:19:52.897 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.897 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.897 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.158 { 00:19:53.158 "cntlid": 135, 00:19:53.158 "qid": 0, 00:19:53.158 "state": "enabled", 00:19:53.158 "listen_address": { 
00:19:53.158 "trtype": "TCP", 00:19:53.158 "adrfam": "IPv4", 00:19:53.158 "traddr": "10.0.0.2", 00:19:53.158 "trsvcid": "4420" 00:19:53.158 }, 00:19:53.158 "peer_address": { 00:19:53.158 "trtype": "TCP", 00:19:53.158 "adrfam": "IPv4", 00:19:53.158 "traddr": "10.0.0.1", 00:19:53.158 "trsvcid": "47234" 00:19:53.158 }, 00:19:53.158 "auth": { 00:19:53.158 "state": "completed", 00:19:53.158 "digest": "sha512", 00:19:53.158 "dhgroup": "ffdhe6144" 00:19:53.158 } 00:19:53.158 } 00:19:53.158 ]' 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.158 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.420 14:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:19:54.362 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.363 14:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.935 00:19:54.935 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.935 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.935 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.194 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.194 { 00:19:55.194 "cntlid": 137, 00:19:55.194 "qid": 0, 00:19:55.194 "state": "enabled", 00:19:55.194 "listen_address": { 00:19:55.194 "trtype": "TCP", 00:19:55.194 "adrfam": "IPv4", 00:19:55.194 "traddr": "10.0.0.2", 00:19:55.194 "trsvcid": "4420" 00:19:55.194 }, 00:19:55.194 "peer_address": { 00:19:55.194 "trtype": "TCP", 00:19:55.194 "adrfam": "IPv4", 00:19:55.194 "traddr": "10.0.0.1", 00:19:55.195 "trsvcid": "36502" 00:19:55.195 }, 00:19:55.195 "auth": { 00:19:55.195 "state": "completed", 00:19:55.195 "digest": "sha512", 00:19:55.195 "dhgroup": "ffdhe8192" 00:19:55.195 } 00:19:55.195 } 00:19:55.195 ]' 00:19:55.195 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.195 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.195 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.195 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.195 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.455 14:28:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.455 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.455 14:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.455 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.397 14:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.397 14:28:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.339 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.339 { 00:19:57.339 "cntlid": 139, 00:19:57.339 "qid": 0, 00:19:57.339 "state": "enabled", 00:19:57.339 "listen_address": { 00:19:57.339 "trtype": "TCP", 00:19:57.339 "adrfam": "IPv4", 00:19:57.339 "traddr": "10.0.0.2", 00:19:57.339 "trsvcid": "4420" 00:19:57.339 }, 00:19:57.339 "peer_address": { 00:19:57.339 "trtype": "TCP", 00:19:57.339 "adrfam": "IPv4", 00:19:57.339 "traddr": "10.0.0.1", 00:19:57.339 "trsvcid": "36540" 00:19:57.339 }, 00:19:57.339 "auth": { 00:19:57.339 "state": "completed", 00:19:57.339 "digest": "sha512", 00:19:57.339 "dhgroup": "ffdhe8192" 00:19:57.339 } 00:19:57.339 } 00:19:57.339 ]' 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.339 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.600 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.600 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.601 14:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.601 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZTlhZDYyMGM4MGYzNDBiYzMyMDk1ZDYyYmVkOGUyOWVDMkwz: --dhchap-ctrl-secret DHHC-1:02:ODcyODRhYmM2YmQyYjZjMTYyMTg3MGU4Y2NkNzYzMTVjNzJkNjFlNTc4ZDAwYjhiZDdnXA==: 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.543 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.543 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.544 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.544 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.544 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.116 00:19:59.377 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.377 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.377 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.378 { 00:19:59.378 "cntlid": 141, 00:19:59.378 "qid": 0, 00:19:59.378 "state": "enabled", 00:19:59.378 "listen_address": { 00:19:59.378 "trtype": "TCP", 00:19:59.378 "adrfam": "IPv4", 00:19:59.378 "traddr": "10.0.0.2", 00:19:59.378 "trsvcid": "4420" 00:19:59.378 }, 00:19:59.378 "peer_address": { 00:19:59.378 "trtype": "TCP", 00:19:59.378 "adrfam": "IPv4", 00:19:59.378 "traddr": "10.0.0.1", 00:19:59.378 "trsvcid": "36568" 00:19:59.378 }, 00:19:59.378 "auth": { 00:19:59.378 "state": "completed", 00:19:59.378 "digest": "sha512", 00:19:59.378 "dhgroup": "ffdhe8192" 00:19:59.378 } 00:19:59.378 } 00:19:59.378 ]' 00:19:59.378 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.638 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.638 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.638 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.638 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.638 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.638 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.638 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.899 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZTIyNWI4ZGYyZTE4MGNlZDUwZDBmNjQwZGM2NTA2ZWJjYjNkZTU3YmVlYWUzNzI3p06DFg==: --dhchap-ctrl-secret DHHC-1:01:NDc1ODg5OWIwMDM2NDg3ZDQyYmExYjc2OThlNjkzNWUTzOUI: 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.470 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.730 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.300 00:20:01.300 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.300 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.300 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.560 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.560 { 00:20:01.560 "cntlid": 143, 00:20:01.560 "qid": 0, 00:20:01.560 "state": "enabled", 00:20:01.560 "listen_address": { 00:20:01.560 "trtype": "TCP", 00:20:01.560 "adrfam": "IPv4", 00:20:01.560 "traddr": "10.0.0.2", 00:20:01.560 "trsvcid": "4420" 00:20:01.560 }, 00:20:01.560 "peer_address": { 00:20:01.560 "trtype": "TCP", 00:20:01.560 "adrfam": "IPv4", 00:20:01.560 "traddr": "10.0.0.1", 00:20:01.560 "trsvcid": "36608" 00:20:01.561 }, 00:20:01.561 "auth": { 00:20:01.561 "state": "completed", 00:20:01.561 "digest": "sha512", 00:20:01.561 "dhgroup": "ffdhe8192" 00:20:01.561 } 00:20:01.561 } 00:20:01.561 ]' 00:20:01.561 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.561 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.561 14:28:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.561 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.561 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.820 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.820 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.820 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.820 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
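Each connect_authenticate pass in the trace above runs the same cycle: configure which DH-CHAP digests and dhgroups the host initiator may offer, register the host NQN and key on the subsystem, attach a controller from the host application, check the negotiated auth fields on the resulting qpair, and tear everything down before the next combination. A minimal sketch of one such pass follows, reusing the sockets, address and NQNs from this run; the rootdir/hostnqn/subnqn variables are illustrative shorthand, not names taken from the test itself.

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # let the host initiator offer the digest/dhgroup under test
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # allow this host NQN on the subsystem with the key pair being exercised
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attach from the host application and inspect the negotiated auth fields
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    "$rootdir/scripts/rpc.py" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

    # tear down before the next digest/dhgroup/key combination
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator variant seen at target/auth.sh@52 exercises the same handshake through nvme connect with an explicit --dhchap-secret DHHC-1:... string instead of the host application's RPC socket.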
00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.762 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.705 00:20:03.705 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.705 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.705 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.705 { 00:20:03.705 "cntlid": 145, 00:20:03.705 "qid": 0, 00:20:03.705 "state": "enabled", 00:20:03.705 "listen_address": { 00:20:03.705 "trtype": "TCP", 00:20:03.705 "adrfam": "IPv4", 00:20:03.705 "traddr": "10.0.0.2", 00:20:03.705 "trsvcid": "4420" 00:20:03.705 }, 00:20:03.705 "peer_address": { 00:20:03.705 "trtype": "TCP", 00:20:03.705 "adrfam": "IPv4", 00:20:03.705 "traddr": "10.0.0.1", 00:20:03.705 "trsvcid": "39956" 00:20:03.705 }, 00:20:03.705 "auth": { 00:20:03.705 "state": "completed", 00:20:03.705 "digest": "sha512", 00:20:03.705 "dhgroup": "ffdhe8192" 00:20:03.705 } 00:20:03.705 } 00:20:03.705 ]' 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.705 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.966 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.966 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.966 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.966 
14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YmYzMTk5OTgxNTI4MGZjOTQzYjJmYTlkMWNjOWFlMDljMGUyYmUwNzc1N2M0NWU1lRkPUQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY4ZmZjMGM2NDk3ODM1NTg0NzRmNTg3N2I1NDJmODczNDlhZGVjODMzYTZkNjhlNWRlYjI3MTEyNGJlYTBhYuxK0aU=: 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.909 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.482 request: 00:20:05.482 { 00:20:05.482 "name": "nvme0", 00:20:05.482 "trtype": "tcp", 00:20:05.482 "traddr": 
"10.0.0.2", 00:20:05.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.482 "adrfam": "ipv4", 00:20:05.482 "trsvcid": "4420", 00:20:05.482 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.482 "dhchap_key": "key2", 00:20:05.482 "method": "bdev_nvme_attach_controller", 00:20:05.482 "req_id": 1 00:20:05.482 } 00:20:05.482 Got JSON-RPC error response 00:20:05.482 response: 00:20:05.482 { 00:20:05.482 "code": -5, 00:20:05.482 "message": "Input/output error" 00:20:05.482 } 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.482 14:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.053 request: 00:20:06.053 { 00:20:06.053 "name": "nvme0", 00:20:06.053 "trtype": "tcp", 00:20:06.053 "traddr": "10.0.0.2", 00:20:06.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.053 "adrfam": "ipv4", 00:20:06.053 "trsvcid": "4420", 00:20:06.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.053 "dhchap_key": "key1", 00:20:06.053 "dhchap_ctrlr_key": "ckey2", 00:20:06.053 "method": "bdev_nvme_attach_controller", 00:20:06.053 "req_id": 1 00:20:06.053 } 00:20:06.053 Got JSON-RPC error response 00:20:06.053 response: 00:20:06.053 { 00:20:06.053 "code": -5, 00:20:06.053 "message": "Input/output error" 00:20:06.053 } 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.053 14:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.626 request: 00:20:06.626 { 00:20:06.626 "name": "nvme0", 00:20:06.626 "trtype": "tcp", 00:20:06.626 "traddr": "10.0.0.2", 00:20:06.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.626 "adrfam": "ipv4", 00:20:06.626 "trsvcid": "4420", 00:20:06.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.626 "dhchap_key": "key1", 00:20:06.626 "dhchap_ctrlr_key": "ckey1", 00:20:06.626 "method": "bdev_nvme_attach_controller", 00:20:06.626 "req_id": 1 00:20:06.626 } 00:20:06.626 Got JSON-RPC error response 00:20:06.626 response: 00:20:06.626 { 00:20:06.626 "code": -5, 00:20:06.626 "message": "Input/output error" 00:20:06.626 } 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3026548 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3026548 ']' 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3026548 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3026548 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3026548' 00:20:06.626 killing process with pid 3026548 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3026548 00:20:06.626 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3026548 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:06.887 14:28:44 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3055240 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3055240 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3055240 ']' 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:06.887 14:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3055240 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3055240 ']' 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
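Before the failure-path checks, the previous application is killed and the target is restarted with the nvmf_auth debug log component enabled, so that rejected DH-CHAP negotiations show up in the target's own log. Roughly, and assuming the same network namespace and binary path as this run (the socket poll below is only a crude stand-in for the suite's waitforlisten helper):

    # restart the target with DH-CHAP debug logging enabled (sketch; paths and
    # namespace are the ones used in this run). -i sets the shm id,
    # --wait-for-rpc holds framework init until an RPC says go, and
    # -L nvmf_auth turns on the auth debug log flag.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # wait for the default RPC socket before issuing any rpc_cmd at the target
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Because of --wait-for-rpc the framework stays idle until started over RPC, which is why the trace waits on the RPC socket before the target/auth.sh@143 rpc_cmd block runs.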
00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:07.827 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.089 14:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.660 00:20:08.660 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.661 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.661 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.921 { 00:20:08.921 
"cntlid": 1, 00:20:08.921 "qid": 0, 00:20:08.921 "state": "enabled", 00:20:08.921 "listen_address": { 00:20:08.921 "trtype": "TCP", 00:20:08.921 "adrfam": "IPv4", 00:20:08.921 "traddr": "10.0.0.2", 00:20:08.921 "trsvcid": "4420" 00:20:08.921 }, 00:20:08.921 "peer_address": { 00:20:08.921 "trtype": "TCP", 00:20:08.921 "adrfam": "IPv4", 00:20:08.921 "traddr": "10.0.0.1", 00:20:08.921 "trsvcid": "40026" 00:20:08.921 }, 00:20:08.921 "auth": { 00:20:08.921 "state": "completed", 00:20:08.921 "digest": "sha512", 00:20:08.921 "dhgroup": "ffdhe8192" 00:20:08.921 } 00:20:08.921 } 00:20:08.921 ]' 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.921 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.188 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.188 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.188 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.188 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:NTQ2NjdmNWYwNDEzZTk5NDZjZjRkOWY3ZTI5MDBmYzVkYWU1ZWExMDYyNmNmNmMyZTFkNGE0NDQ5NjI5MzQ5ZN5vXuU=: 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.203 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.465 request: 00:20:10.465 { 00:20:10.465 "name": "nvme0", 00:20:10.465 "trtype": "tcp", 00:20:10.465 "traddr": "10.0.0.2", 00:20:10.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.465 "adrfam": "ipv4", 00:20:10.465 "trsvcid": "4420", 00:20:10.465 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.465 "dhchap_key": "key3", 00:20:10.465 "method": "bdev_nvme_attach_controller", 00:20:10.465 "req_id": 1 00:20:10.465 } 00:20:10.465 Got JSON-RPC error response 00:20:10.465 response: 00:20:10.465 { 00:20:10.465 "code": -5, 00:20:10.465 "message": "Input/output error" 00:20:10.465 } 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.465 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.725 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.725 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:10.725 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.725 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:10.726 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.726 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:10.726 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.726 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.726 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.987 request: 00:20:10.987 { 00:20:10.987 "name": "nvme0", 00:20:10.987 "trtype": "tcp", 00:20:10.987 "traddr": "10.0.0.2", 00:20:10.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.987 "adrfam": "ipv4", 00:20:10.987 "trsvcid": "4420", 00:20:10.987 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.987 "dhchap_key": "key3", 00:20:10.987 "method": "bdev_nvme_attach_controller", 00:20:10.987 "req_id": 1 00:20:10.987 } 00:20:10.987 Got JSON-RPC error response 00:20:10.987 response: 00:20:10.987 { 00:20:10.987 "code": -5, 00:20:10.987 "message": "Input/output error" 00:20:10.987 } 00:20:10.987 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:10.987 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:10.987 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:10.987 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.988 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.249 request: 00:20:11.249 { 00:20:11.249 "name": "nvme0", 00:20:11.249 "trtype": "tcp", 00:20:11.249 "traddr": "10.0.0.2", 00:20:11.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.249 "adrfam": "ipv4", 00:20:11.249 "trsvcid": "4420", 00:20:11.249 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.249 "dhchap_key": "key0", 00:20:11.249 "dhchap_ctrlr_key": "key1", 00:20:11.249 "method": "bdev_nvme_attach_controller", 00:20:11.249 "req_id": 1 00:20:11.249 } 00:20:11.249 Got JSON-RPC error response 00:20:11.249 response: 00:20:11.249 { 00:20:11.249 "code": -5, 00:20:11.249 "message": "Input/output error" 00:20:11.249 } 00:20:11.249 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:11.249 14:28:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:11.249 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:11.249 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:11.249 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:11.249 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:11.509 00:20:11.509 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:11.509 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:11.509 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.770 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.770 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.770 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3026765 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3026765 ']' 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3026765 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3026765 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3026765' 00:20:12.031 killing process with pid 3026765 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3026765 00:20:12.031 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3026765 00:20:12.290 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
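The hostrpc helper seen throughout this suite is, per the auth.sh@31 expansion above, a thin wrapper around scripts/rpc.py pointed at the host-side socket /var/tmp/host.sock. A minimal manual reproduction of the successful attach/inspect/detach sequence above, assuming the target from this run is still listening on 10.0.0.2:4420 and key0 is still registered for this host, would be:
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0      # attach with the host-side DH-CHAP key
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0              # clean up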
00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.291 rmmod nvme_tcp 00:20:12.291 rmmod nvme_fabrics 00:20:12.291 rmmod nvme_keyring 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3055240 ']' 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3055240 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3055240 ']' 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3055240 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3055240 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3055240' 00:20:12.291 killing process with pid 3055240 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3055240 00:20:12.291 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3055240 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.551 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.466 14:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.466 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gom /tmp/spdk.key-sha256.OLU /tmp/spdk.key-sha384.Zh5 /tmp/spdk.key-sha512.XZJ /tmp/spdk.key-sha512.xmw /tmp/spdk.key-sha384.tcg /tmp/spdk.key-sha256.Q6r '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:14.466 00:20:14.466 real 2m34.667s 00:20:14.466 user 5m54.550s 00:20:14.466 sys 0m20.433s 00:20:14.466 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:14.466 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.466 ************************************ 00:20:14.466 END TEST 
nvmf_auth_target 00:20:14.466 ************************************ 00:20:14.742 14:28:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:14.742 14:28:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:14.742 14:28:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:20:14.742 14:28:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:14.742 14:28:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.742 ************************************ 00:20:14.742 START TEST nvmf_bdevio_no_huge 00:20:14.742 ************************************ 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:14.742 * Looking for test storage... 00:20:14.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
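run_test above prints the START TEST / END TEST banners and the timing summary seen here around the script it runs; the same suite can also be launched on its own. A sketch, assuming the workspace layout of this job and a root shell:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages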
00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.742 14:28:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:21.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:21.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.330 14:28:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:21.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.330 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:21.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.331 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.591 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.591 
14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.591 14:28:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:20:21.591 00:20:21.591 --- 10.0.0.2 ping statistics --- 00:20:21.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.591 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:20:21.591 00:20:21.591 --- 10.0.0.1 ping statistics --- 00:20:21.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.591 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3060460 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3060460 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 3060460 ']' 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.591 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:20:21.592 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.592 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:21.592 14:28:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.852 [2024-06-10 14:28:59.220887] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:20:21.852 [2024-06-10 14:28:59.220967] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:21.852 [2024-06-10 14:28:59.318257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.852 [2024-06-10 14:28:59.425519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.852 [2024-06-10 14:28:59.425572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.852 [2024-06-10 14:28:59.425580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.852 [2024-06-10 14:28:59.425587] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.852 [2024-06-10 14:28:59.425593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.852 [2024-06-10 14:28:59.425759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:20:21.852 [2024-06-10 14:28:59.425898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:20:21.852 [2024-06-10 14:28:59.426021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.852 [2024-06-10 14:28:59.426022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.795 [2024-06-10 14:29:00.160863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 
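Collapsed out of the xtrace, the target-side bring-up performed here and on the following lines is this rpc.py sequence (a sketch, assuming the nvmf_tgt started above is serving its default /var/tmp/spdk.sock socket):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420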
00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.795 Malloc0 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.795 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.796 [2024-06-10 14:29:00.202504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:22.796 { 00:20:22.796 "params": { 00:20:22.796 "name": "Nvme$subsystem", 00:20:22.796 "trtype": "$TEST_TRANSPORT", 00:20:22.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.796 "adrfam": "ipv4", 00:20:22.796 "trsvcid": "$NVMF_PORT", 00:20:22.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.796 "hdgst": ${hdgst:-false}, 00:20:22.796 "ddgst": ${ddgst:-false} 00:20:22.796 }, 00:20:22.796 "method": "bdev_nvme_attach_controller" 00:20:22.796 } 00:20:22.796 EOF 00:20:22.796 )") 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
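The --json /dev/fd/62 argument above comes from bash process substitution: gen_nvmf_target_json (from nvmf/common.sh) emits the attach-controller parameters printed next, so the wrapper effectively runs (a sketch, path relative to the spdk checkout of this workspace):
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024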
00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:22.796 14:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:22.796 "params": { 00:20:22.796 "name": "Nvme1", 00:20:22.796 "trtype": "tcp", 00:20:22.796 "traddr": "10.0.0.2", 00:20:22.796 "adrfam": "ipv4", 00:20:22.796 "trsvcid": "4420", 00:20:22.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.796 "hdgst": false, 00:20:22.796 "ddgst": false 00:20:22.796 }, 00:20:22.796 "method": "bdev_nvme_attach_controller" 00:20:22.796 }' 00:20:22.796 [2024-06-10 14:29:00.254892] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:20:22.796 [2024-06-10 14:29:00.254968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3060666 ] 00:20:22.796 [2024-06-10 14:29:00.342784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:23.056 [2024-06-10 14:29:00.450464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.056 [2024-06-10 14:29:00.450594] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.056 [2024-06-10 14:29:00.450597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.317 I/O targets: 00:20:23.317 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:23.317 00:20:23.317 00:20:23.317 CUnit - A unit testing framework for C - Version 2.1-3 00:20:23.317 http://cunit.sourceforge.net/ 00:20:23.317 00:20:23.317 00:20:23.317 Suite: bdevio tests on: Nvme1n1 00:20:23.317 Test: blockdev write read block ...passed 00:20:23.317 Test: blockdev write zeroes read block ...passed 00:20:23.317 Test: blockdev write zeroes read no split ...passed 00:20:23.578 Test: blockdev write zeroes read split ...passed 00:20:23.578 Test: blockdev write zeroes read split partial ...passed 00:20:23.578 Test: blockdev reset ...[2024-06-10 14:29:00.965493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.578 [2024-06-10 14:29:00.965549] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24beaf0 (9): Bad file descriptor 00:20:23.578 [2024-06-10 14:29:01.024243] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:23.578 passed 00:20:23.578 Test: blockdev write read 8 blocks ...passed 00:20:23.578 Test: blockdev write read size > 128k ...passed 00:20:23.578 Test: blockdev write read invalid size ...passed 00:20:23.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:23.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:23.578 Test: blockdev write read max offset ...passed 00:20:23.839 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:23.839 Test: blockdev writev readv 8 blocks ...passed 00:20:23.839 Test: blockdev writev readv 30 x 1block ...passed 00:20:23.839 Test: blockdev writev readv block ...passed 00:20:23.839 Test: blockdev writev readv size > 128k ...passed 00:20:23.839 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:23.839 Test: blockdev comparev and writev ...[2024-06-10 14:29:01.286429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.286453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.286463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.286469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.286967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.286975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.286984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.286989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.287458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.287466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.287475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.287481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.287922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.287934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.287943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.839 [2024-06-10 14:29:01.287949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:23.839 passed 00:20:23.839 Test: blockdev nvme passthru rw ...passed 00:20:23.839 Test: blockdev nvme passthru vendor specific ...[2024-06-10 14:29:01.372949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.839 [2024-06-10 14:29:01.372960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.373263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.839 [2024-06-10 14:29:01.373270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.373613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.839 [2024-06-10 14:29:01.373620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:23.839 [2024-06-10 14:29:01.373952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.839 [2024-06-10 14:29:01.373960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:23.839 passed 00:20:23.839 Test: blockdev nvme admin passthru ...passed 00:20:23.839 Test: blockdev copy ...passed 00:20:23.839 00:20:23.839 Run Summary: Type Total Ran Passed Failed Inactive 00:20:23.839 suites 1 1 n/a 0 0 00:20:23.839 tests 23 23 23 0 0 00:20:23.839 asserts 152 152 152 0 n/a 00:20:23.839 00:20:23.839 Elapsed time = 1.329 seconds 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.412 rmmod nvme_tcp 00:20:24.412 rmmod nvme_fabrics 00:20:24.412 rmmod nvme_keyring 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3060460 ']' 00:20:24.412 14:29:01 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3060460 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 3060460 ']' 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 3060460 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3060460 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3060460' 00:20:24.412 killing process with pid 3060460 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 3060460 00:20:24.412 14:29:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 3060460 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.672 14:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.217 14:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.217 00:20:27.217 real 0m12.078s 00:20:27.217 user 0m15.133s 00:20:27.217 sys 0m6.201s 00:20:27.217 14:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:27.218 14:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:27.218 ************************************ 00:20:27.218 END TEST nvmf_bdevio_no_huge 00:20:27.218 ************************************ 00:20:27.218 14:29:04 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:27.218 14:29:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:27.218 14:29:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:27.218 14:29:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.218 ************************************ 00:20:27.218 START TEST nvmf_tls 00:20:27.218 ************************************ 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:27.218 * Looking for test storage... 
00:20:27.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.218 14:29:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.806 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.806 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.806 
14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.806 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:33.807 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:33.807 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:33.807 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:33.807 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:20:33.807 00:20:33.807 --- 10.0.0.2 ping statistics --- 00:20:33.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.807 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:20:33.807 00:20:33.807 --- 10.0.0.1 ping statistics --- 00:20:33.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.807 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3065159 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3065159 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3065159 ']' 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:33.807 14:29:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.807 [2024-06-10 14:29:11.385106] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:20:33.807 [2024-06-10 14:29:11.385153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.067 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.067 [2024-06-10 14:29:11.450955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.067 [2024-06-10 14:29:11.518411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.067 [2024-06-10 14:29:11.518448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
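For readers following the trace, the nvmf_tcp_init phase above boils down to splitting the two detected E810 ports between a target network namespace and the host: one port (cvl_0_0 on this machine) is moved into cvl_0_0_ns_spdk and given 10.0.0.2, the other (cvl_0_1) stays in the host namespace as the initiator side with 10.0.0.1, and a single ping in each direction confirms the path before any NVMe/TCP traffic is attempted. Condensed, with the interface names as detected on this particular host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP port 4420 through
  ping -c 1 10.0.0.2                                              # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> host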
00:20:34.067 [2024-06-10 14:29:11.518455] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.067 [2024-06-10 14:29:11.518462] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.068 [2024-06-10 14:29:11.518468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.068 [2024-06-10 14:29:11.518491] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:34.638 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:34.897 true 00:20:34.897 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.897 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:35.156 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:35.156 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:35.156 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:35.416 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.416 14:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:35.676 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:35.676 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:35.676 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:35.676 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.676 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:35.936 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:35.936 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:35.936 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.936 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:36.197 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:36.197 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:36.197 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:36.457 14:29:13 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.457 14:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:36.457 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:36.457 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:36.457 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:36.717 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.717 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hxCXtfNHT1 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:36.978 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.poxRt7PD4c 00:20:36.979 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:36.979 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:36.979 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hxCXtfNHT1 00:20:36.979 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.poxRt7PD4c 00:20:36.979 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:37.239 14:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:37.499 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hxCXtfNHT1 00:20:37.499 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hxCXtfNHT1 00:20:37.499 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:37.773 [2024-06-10 14:29:15.220168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.773 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.043 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.044 [2024-06-10 14:29:15.609136] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.044 [2024-06-10 14:29:15.609341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.044 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.304 malloc0 00:20:38.304 14:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.564 14:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hxCXtfNHT1 00:20:38.825 [2024-06-10 14:29:16.193370] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:38.825 14:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hxCXtfNHT1 00:20:38.825 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.820 Initializing NVMe Controllers 00:20:48.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.820 Initialization complete. Launching workers. 
00:20:48.820 ======================================================== 00:20:48.820 Latency(us) 00:20:48.820 Device Information : IOPS MiB/s Average min max 00:20:48.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13542.48 52.90 4726.45 1083.22 5432.53 00:20:48.820 ======================================================== 00:20:48.820 Total : 13542.48 52.90 4726.45 1083.22 5432.53 00:20:48.820 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hxCXtfNHT1 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hxCXtfNHT1' 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3068127 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3068127 /var/tmp/bdevperf.sock 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3068127 ']' 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:48.820 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.820 [2024-06-10 14:29:26.381681] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
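Condensed for readability, the target-side TLS bring-up traced above (target/tls.sh, roughly steps 63 through 137) is the following RPC sequence; rpc.py stands for the full scripts/rpc.py path shown in the trace, and the PSK file name is the mktemp result of this particular run:

  rpc.py sock_set_default_impl -i ssl                      # target was launched with --wait-for-rpc
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > /tmp/tmp.hxCXtfNHT1
  chmod 0600 /tmp/tmp.hxCXtfNHT1
  rpc.py sock_impl_set_options -i ssl --tls-version 13     # the script first probes version 7 and the --enable-ktls/--disable-ktls toggle as well
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k requests a TLS-secured listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hxCXtfNHT1

With the listener and the host PSK in place, target/tls.sh@137 runs the spdk_nvme_perf sanity pass with -S ssl and --psk-path pointing at the same key file; its IOPS/latency table is the block directly above.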
00:20:48.820 [2024-06-10 14:29:26.381735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068127 ] 00:20:48.820 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.080 [2024-06-10 14:29:26.430058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.080 [2024-06-10 14:29:26.483425] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.080 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:49.080 14:29:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:49.080 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hxCXtfNHT1 00:20:49.340 [2024-06-10 14:29:26.686964] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.340 [2024-06-10 14:29:26.687022] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.340 TLSTESTn1 00:20:49.340 14:29:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.340 Running I/O for 10 seconds... 00:20:59.340 00:20:59.340 Latency(us) 00:20:59.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.340 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.340 Verification LBA range: start 0x0 length 0x2000 00:20:59.340 TLSTESTn1 : 10.02 4164.98 16.27 0.00 0.00 30695.93 4505.60 66846.72 00:20:59.340 =================================================================================================================== 00:20:59.340 Total : 4164.98 16.27 0.00 0.00 30695.93 4505.60 66846.72 00:20:59.340 0 00:20:59.340 14:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.340 14:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3068127 00:20:59.340 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3068127 ']' 00:20:59.340 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3068127 00:20:59.340 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3068127 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3068127' 00:20:59.602 killing process with pid 3068127 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3068127 00:20:59.602 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.602 00:20:59.602 Latency(us) 00:20:59.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:59.602 =================================================================================================================== 00:20:59.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.602 [2024-06-10 14:29:36.986448] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:59.602 14:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3068127 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.poxRt7PD4c 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.poxRt7PD4c 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.poxRt7PD4c 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.poxRt7PD4c' 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070238 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070238 /var/tmp/bdevperf.sock 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070238 ']' 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:59.602 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.602 [2024-06-10 14:29:37.149730] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
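Each run_bdevperf step in this file follows the same pattern: start bdevperf idle (-z) on its own RPC socket, attach an NVMe-oF controller over TLS through that socket, then drive the verify workload from bdevperf.py. Stripped of the jenkins workspace prefix, the successful run above is roughly:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hxCXtfNHT1
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Because the key offered by the initiator matches the one registered for host1 on cnode1, the attach succeeds and TLSTESTn1 completes the 10 second run; the negative cases that follow reuse exactly this skeleton and only change the PSK or the NQNs, so they are expected to fail at the attach step.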
00:20:59.602 [2024-06-10 14:29:37.149785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070238 ] 00:20:59.602 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.863 [2024-06-10 14:29:37.198085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.863 [2024-06-10 14:29:37.249990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.863 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.863 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:59.863 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.poxRt7PD4c 00:21:00.124 [2024-06-10 14:29:37.501422] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.124 [2024-06-10 14:29:37.501478] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.124 [2024-06-10 14:29:37.506399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.124 [2024-06-10 14:29:37.507387] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1771de0 (107): Transport endpoint is not connected 00:21:00.124 [2024-06-10 14:29:37.508382] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1771de0 (9): Bad file descriptor 00:21:00.124 [2024-06-10 14:29:37.509384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.124 [2024-06-10 14:29:37.509392] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.124 [2024-06-10 14:29:37.509399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.124 request: 00:21:00.124 { 00:21:00.124 "name": "TLSTEST", 00:21:00.124 "trtype": "tcp", 00:21:00.124 "traddr": "10.0.0.2", 00:21:00.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.124 "adrfam": "ipv4", 00:21:00.124 "trsvcid": "4420", 00:21:00.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.124 "psk": "/tmp/tmp.poxRt7PD4c", 00:21:00.124 "method": "bdev_nvme_attach_controller", 00:21:00.124 "req_id": 1 00:21:00.124 } 00:21:00.124 Got JSON-RPC error response 00:21:00.124 response: 00:21:00.124 { 00:21:00.124 "code": -5, 00:21:00.124 "message": "Input/output error" 00:21:00.124 } 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3070238 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070238 ']' 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070238 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070238 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070238' 00:21:00.124 killing process with pid 3070238 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070238 00:21:00.124 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.124 00:21:00.124 Latency(us) 00:21:00.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.124 =================================================================================================================== 00:21:00.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.124 [2024-06-10 14:29:37.584278] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070238 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:00.124 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hxCXtfNHT1 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hxCXtfNHT1 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hxCXtfNHT1 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hxCXtfNHT1' 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070251 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070251 /var/tmp/bdevperf.sock 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070251 ']' 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:00.125 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.385 [2024-06-10 14:29:37.740545] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:00.385 [2024-06-10 14:29:37.740601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070251 ] 00:21:00.385 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.385 [2024-06-10 14:29:37.790451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.385 [2024-06-10 14:29:37.842076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.385 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:00.385 14:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:00.385 14:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hxCXtfNHT1 00:21:00.645 [2024-06-10 14:29:38.109627] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.645 [2024-06-10 14:29:38.109694] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.645 [2024-06-10 14:29:38.117805] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:00.645 [2024-06-10 14:29:38.117827] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:00.645 [2024-06-10 14:29:38.117852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.645 [2024-06-10 14:29:38.118832] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fbde0 (107): Transport endpoint is not connected 00:21:00.645 [2024-06-10 14:29:38.119827] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fbde0 (9): Bad file descriptor 00:21:00.645 [2024-06-10 14:29:38.120829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.645 [2024-06-10 14:29:38.120835] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.645 [2024-06-10 14:29:38.120842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
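The failure above is the intended outcome for this case: the target looks TLS keys up by an identity built from the host NQN and the subsystem NQN (the 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' string in the tcp_sock_get_key and posix_sock_psk_find_session_server_cb errors), and a PSK was only registered for host1, so the handshake offered for host2 finds nothing, the connection is torn down (errno 107, Transport endpoint is not connected) and bdev_nvme_attach_controller reports JSON-RPC error -5. The same expectation, written out by hand against the bdevperf RPC socket:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hxCXtfNHT1 \
      && { echo "attach with an unregistered host NQN unexpectedly succeeded" >&2; exit 1; }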
00:21:00.645 request: 00:21:00.645 { 00:21:00.645 "name": "TLSTEST", 00:21:00.645 "trtype": "tcp", 00:21:00.645 "traddr": "10.0.0.2", 00:21:00.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:00.645 "adrfam": "ipv4", 00:21:00.645 "trsvcid": "4420", 00:21:00.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.645 "psk": "/tmp/tmp.hxCXtfNHT1", 00:21:00.645 "method": "bdev_nvme_attach_controller", 00:21:00.645 "req_id": 1 00:21:00.645 } 00:21:00.645 Got JSON-RPC error response 00:21:00.645 response: 00:21:00.645 { 00:21:00.645 "code": -5, 00:21:00.645 "message": "Input/output error" 00:21:00.645 } 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3070251 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070251 ']' 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070251 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070251 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070251' 00:21:00.645 killing process with pid 3070251 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070251 00:21:00.645 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.645 00:21:00.645 Latency(us) 00:21:00.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.645 =================================================================================================================== 00:21:00.645 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.645 [2024-06-10 14:29:38.203674] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.645 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070251 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hxCXtfNHT1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hxCXtfNHT1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hxCXtfNHT1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hxCXtfNHT1' 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.907 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070422 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070422 /var/tmp/bdevperf.sock 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070422 ']' 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:00.908 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.908 [2024-06-10 14:29:38.359547] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:00.908 [2024-06-10 14:29:38.359602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070422 ] 00:21:00.908 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.908 [2024-06-10 14:29:38.409642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.908 [2024-06-10 14:29:38.461461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.169 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:01.169 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:01.169 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hxCXtfNHT1 00:21:01.169 [2024-06-10 14:29:38.729018] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.169 [2024-06-10 14:29:38.729082] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.169 [2024-06-10 14:29:38.739326] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.169 [2024-06-10 14:29:38.739348] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.169 [2024-06-10 14:29:38.739371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.169 [2024-06-10 14:29:38.740119] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ecde0 (107): Transport endpoint is not connected 00:21:01.169 [2024-06-10 14:29:38.741114] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ecde0 (9): Bad file descriptor 00:21:01.169 [2024-06-10 14:29:38.742116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:01.169 [2024-06-10 14:29:38.742122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.169 [2024-06-10 14:29:38.742129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:01.169 request: 00:21:01.169 { 00:21:01.169 "name": "TLSTEST", 00:21:01.169 "trtype": "tcp", 00:21:01.169 "traddr": "10.0.0.2", 00:21:01.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.169 "adrfam": "ipv4", 00:21:01.169 "trsvcid": "4420", 00:21:01.169 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.169 "psk": "/tmp/tmp.hxCXtfNHT1", 00:21:01.169 "method": "bdev_nvme_attach_controller", 00:21:01.169 "req_id": 1 00:21:01.169 } 00:21:01.169 Got JSON-RPC error response 00:21:01.169 response: 00:21:01.169 { 00:21:01.169 "code": -5, 00:21:01.169 "message": "Input/output error" 00:21:01.169 } 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3070422 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070422 ']' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070422 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070422 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070422' 00:21:01.429 killing process with pid 3070422 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070422 00:21:01.429 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.429 00:21:01.429 Latency(us) 00:21:01.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.429 =================================================================================================================== 00:21:01.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.429 [2024-06-10 14:29:38.829180] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070422 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
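All of these negative cases go through the suite's NOT wrapper, which is why each one ends with 'return 1' from run_bdevperf followed by 'es=1' and the '(( !es == 0 ))' check: the test passes only if the wrapped command fails. The final case, set up below, drops the PSK entirely ('' instead of a key file), so the initiator connects without TLS to the listener that was created with -k and the attach fails in the same way. Expressed as a stand-alone expectation (a sketch in the spirit of autotest_common.sh's NOT helper, not a copy of it):

  if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''; then
      echo "attach without a PSK should not have succeeded against a TLS listener" >&2
      exit 1
  fi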
00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.429 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070597 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070597 /var/tmp/bdevperf.sock 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070597 ']' 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:01.430 14:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.430 [2024-06-10 14:29:38.994528] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:01.430 [2024-06-10 14:29:38.994582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070597 ] 00:21:01.430 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.690 [2024-06-10 14:29:39.044370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.690 [2024-06-10 14:29:39.095793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.690 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:01.690 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:01.690 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:01.951 [2024-06-10 14:29:39.370907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.951 [2024-06-10 14:29:39.372880] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a2820 (9): Bad file descriptor 00:21:01.951 [2024-06-10 14:29:39.373879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.951 [2024-06-10 14:29:39.373886] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.951 [2024-06-10 14:29:39.373893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:01.951 request: 00:21:01.951 { 00:21:01.951 "name": "TLSTEST", 00:21:01.951 "trtype": "tcp", 00:21:01.951 "traddr": "10.0.0.2", 00:21:01.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.951 "adrfam": "ipv4", 00:21:01.951 "trsvcid": "4420", 00:21:01.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.951 "method": "bdev_nvme_attach_controller", 00:21:01.951 "req_id": 1 00:21:01.951 } 00:21:01.951 Got JSON-RPC error response 00:21:01.951 response: 00:21:01.951 { 00:21:01.951 "code": -5, 00:21:01.951 "message": "Input/output error" 00:21:01.951 } 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3070597 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070597 ']' 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070597 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070597 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070597' 00:21:01.951 killing process with pid 3070597 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070597 00:21:01.951 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.951 00:21:01.951 Latency(us) 00:21:01.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.951 =================================================================================================================== 00:21:01.951 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.951 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070597 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3065159 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3065159 ']' 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3065159 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3065159 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3065159' 00:21:02.212 killing process with pid 3065159 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3065159 
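The failed attach above (JSON-RPC error -5, Input/output error) is the expected outcome of pointing bdevperf at the TLS-enabled listener without supplying a PSK. A minimal sketch of that negative check, reusing only the rpc.py invocation that appears in the trace and assuming bdevperf is already serving /var/tmp/bdevperf.sock:

```bash
# Sketch of the no-PSK negative check: the attach must fail when the listener
# requires TLS and no --psk is given. Assumes bdevperf is already listening on
# /var/tmp/bdevperf.sock, as in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
    echo "unexpected success: attach without a PSK should be rejected" >&2
    exit 1
fi
```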
00:21:02.212 [2024-06-10 14:29:39.615677] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3065159 00:21:02.212 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:02.213 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.eqjKDGyyXj 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.eqjKDGyyXj 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.473 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3070670 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3070670 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070670 ']' 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:02.474 14:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.474 [2024-06-10 14:29:39.872783] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
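The key_long value generated above is an NVMe TLS PSK in interchange format: the "NVMeTLSkey-1" prefix, a hash identifier ("02", mirroring the digest argument passed to format_interchange_psk), and a base64 payload, written to a temp file that is then made owner-only. A sketch of producing such a key outside the test framework follows; the assumption that the 4-byte trailer inside the base64 payload is the little-endian zlib CRC-32 of the key bytes is mine, not something shown in the trace:

```bash
# Sketch only: build a PSK interchange string shaped like key_long above.
# Assumption (not confirmed by the trace): the base64 payload is the configured
# key bytes followed by their little-endian zlib CRC-32.
key=00112233445566778899aabbccddeeff0011223344556677
psk=$(python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:02:{blob}:")
PY
)
key_path=$(mktemp)
echo -n "$psk" > "$key_path"
chmod 0600 "$key_path"   # keys readable by group/other are rejected later in the run
```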
00:21:02.474 [2024-06-10 14:29:39.872840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.474 [2024-06-10 14:29:39.940010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.474 [2024-06-10 14:29:40.007612] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.474 [2024-06-10 14:29:40.007649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.474 [2024-06-10 14:29:40.007656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.474 [2024-06-10 14:29:40.007663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.474 [2024-06-10 14:29:40.007668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.474 [2024-06-10 14:29:40.007693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eqjKDGyyXj 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:02.734 [2024-06-10 14:29:40.311287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.734 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.994 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.254 [2024-06-10 14:29:40.712284] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.254 [2024-06-10 14:29:40.712480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.254 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.514 malloc0 00:21:03.514 14:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:03.774 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 
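Stripped of the xtrace prefixes, the target-side TLS setup traced above reduces to the sequence below; every command is taken verbatim from the trace, with $rpc standing in for the workspace's scripts/rpc.py path:

```bash
# Condensed target-side TLS setup, as traced above (xtrace noise removed).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj
```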
00:21:03.775 [2024-06-10 14:29:41.316270] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqjKDGyyXj 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eqjKDGyyXj' 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070985 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070985 /var/tmp/bdevperf.sock 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3070985 ']' 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:03.775 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.035 [2024-06-10 14:29:41.379420] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:04.035 [2024-06-10 14:29:41.379472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070985 ] 00:21:04.035 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.035 [2024-06-10 14:29:41.430864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.035 [2024-06-10 14:29:41.483417] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.035 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:04.035 14:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:04.035 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:04.296 [2024-06-10 14:29:41.703005] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.296 [2024-06-10 14:29:41.703069] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:04.296 TLSTESTn1 00:21:04.296 14:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:04.556 Running I/O for 10 seconds... 00:21:14.559 00:21:14.559 Latency(us) 00:21:14.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.559 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:14.559 Verification LBA range: start 0x0 length 0x2000 00:21:14.559 TLSTESTn1 : 10.02 4141.28 16.18 0.00 0.00 30870.98 4478.29 76021.76 00:21:14.559 =================================================================================================================== 00:21:14.559 Total : 4141.28 16.18 0.00 0.00 30870.98 4478.29 76021.76 00:21:14.559 0 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3070985 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070985 ']' 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070985 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:14.559 14:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070985 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070985' 00:21:14.559 killing process with pid 3070985 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070985 00:21:14.559 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.559 00:21:14.559 Latency(us) 00:21:14.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:14.559 =================================================================================================================== 00:21:14.559 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.559 [2024-06-10 14:29:52.023553] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070985 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.eqjKDGyyXj 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqjKDGyyXj 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqjKDGyyXj 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:14.559 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eqjKDGyyXj 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eqjKDGyyXj' 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3073136 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3073136 /var/tmp/bdevperf.sock 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3073136 ']' 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:14.560 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.821 [2024-06-10 14:29:52.192811] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
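The successful 10-second run above is the initiator-side half of the same flow: bdevperf attaches to the TLS listener with the PSK and then drives verify I/O through the bdevperf.py helper. A condensed sketch, again using only invocations that appear in the trace and assuming bdevperf was started with -z -r /var/tmp/bdevperf.sock:

```bash
# Condensed initiator-side flow for the successful TLS run above.
# Assumes bdevperf is already running with: -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```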
00:21:14.821 [2024-06-10 14:29:52.192868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073136 ] 00:21:14.821 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.821 [2024-06-10 14:29:52.242886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.821 [2024-06-10 14:29:52.295093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.821 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:14.821 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:14.821 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:15.081 [2024-06-10 14:29:52.558734] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.082 [2024-06-10 14:29:52.558776] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:15.082 [2024-06-10 14:29:52.558782] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.eqjKDGyyXj 00:21:15.082 request: 00:21:15.082 { 00:21:15.082 "name": "TLSTEST", 00:21:15.082 "trtype": "tcp", 00:21:15.082 "traddr": "10.0.0.2", 00:21:15.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.082 "adrfam": "ipv4", 00:21:15.082 "trsvcid": "4420", 00:21:15.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.082 "psk": "/tmp/tmp.eqjKDGyyXj", 00:21:15.082 "method": "bdev_nvme_attach_controller", 00:21:15.082 "req_id": 1 00:21:15.082 } 00:21:15.082 Got JSON-RPC error response 00:21:15.082 response: 00:21:15.082 { 00:21:15.082 "code": -1, 00:21:15.082 "message": "Operation not permitted" 00:21:15.082 } 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3073136 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3073136 ']' 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3073136 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3073136 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3073136' 00:21:15.082 killing process with pid 3073136 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3073136 00:21:15.082 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.082 00:21:15.082 Latency(us) 00:21:15.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.082 =================================================================================================================== 00:21:15.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:15.082 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 3073136 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3070670 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3070670 ']' 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3070670 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3070670 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3070670' 00:21:15.343 killing process with pid 3070670 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3070670 00:21:15.343 [2024-06-10 14:29:52.805845] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:15.343 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3070670 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3073333 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3073333 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3073333 ']' 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:15.605 14:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.605 [2024-06-10 14:29:53.000287] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
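The failed attach above (-1, Operation not permitted) is caused purely by the chmod 0666 on the key file: bdev_nvme refuses to load a PSK whose file permissions are looser than owner-only and reports "Incorrect permissions for PSK file". A one-line sketch of the trigger and its remedy, as exercised in the trace:

```bash
# The PSK file must be owner-only before it can be loaded.
chmod 0666 /tmp/tmp.eqjKDGyyXj   # attach now fails with JSON-RPC -1 (Operation not permitted)
chmod 0600 /tmp/tmp.eqjKDGyyXj   # restore owner-only access before the key is used again
```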
00:21:15.605 [2024-06-10 14:29:53.000347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.605 [2024-06-10 14:29:53.063585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.605 [2024-06-10 14:29:53.127919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.605 [2024-06-10 14:29:53.127953] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.605 [2024-06-10 14:29:53.127961] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.605 [2024-06-10 14:29:53.127967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.605 [2024-06-10 14:29:53.127972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.605 [2024-06-10 14:29:53.127990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eqjKDGyyXj 00:21:15.867 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:15.867 [2024-06-10 14:29:53.448961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.178 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.178 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.451 [2024-06-10 14:29:53.845961] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:21:16.451 [2024-06-10 14:29:53.846153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.451 14:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.712 malloc0 00:21:16.712 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.712 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:16.972 [2024-06-10 14:29:54.458192] tcp.c:3581:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:16.972 [2024-06-10 14:29:54.458214] tcp.c:3667:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:16.972 [2024-06-10 14:29:54.458241] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:16.972 request: 00:21:16.972 { 00:21:16.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.973 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.973 "psk": "/tmp/tmp.eqjKDGyyXj", 00:21:16.973 "method": "nvmf_subsystem_add_host", 00:21:16.973 "req_id": 1 00:21:16.973 } 00:21:16.973 Got JSON-RPC error response 00:21:16.973 response: 00:21:16.973 { 00:21:16.973 "code": -32603, 00:21:16.973 "message": "Internal error" 00:21:16.973 } 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3073333 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3073333 ']' 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3073333 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3073333 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3073333' 00:21:16.973 killing process with pid 3073333 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3073333 00:21:16.973 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3073333 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.eqjKDGyyXj 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3073708 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3073708 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3073708 ']' 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.234 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:17.235 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.235 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:17.235 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.235 [2024-06-10 14:29:54.729944] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:17.235 [2024-06-10 14:29:54.729994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.235 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.235 [2024-06-10 14:29:54.796899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.496 [2024-06-10 14:29:54.858531] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.496 [2024-06-10 14:29:54.858567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.496 [2024-06-10 14:29:54.858575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.496 [2024-06-10 14:29:54.858581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.496 [2024-06-10 14:29:54.858586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.496 [2024-06-10 14:29:54.858611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eqjKDGyyXj 00:21:17.496 14:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.757 [2024-06-10 14:29:55.163686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.757 14:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.017 14:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.017 [2024-06-10 14:29:55.572719] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.017 [2024-06-10 14:29:55.572917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.017 14:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.276 malloc0 00:21:18.276 14:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.536 14:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:18.796 [2024-06-10 14:29:56.172910] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3074056 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3074056 /var/tmp/bdevperf.sock 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3074056 ']' 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:18.796 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.796 [2024-06-10 14:29:56.235443] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:18.796 [2024-06-10 14:29:56.235495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074056 ] 00:21:18.796 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.796 [2024-06-10 14:29:56.285233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.796 [2024-06-10 14:29:56.337763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.056 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:19.056 14:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:19.056 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:19.056 [2024-06-10 14:29:56.601351] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.056 [2024-06-10 14:29:56.601407] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:19.316 TLSTESTn1 00:21:19.316 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:19.576 14:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:19.576 "subsystems": [ 00:21:19.576 { 00:21:19.576 "subsystem": "keyring", 00:21:19.576 "config": [] 00:21:19.576 }, 00:21:19.576 { 00:21:19.576 "subsystem": "iobuf", 00:21:19.576 "config": [ 00:21:19.576 { 00:21:19.576 "method": "iobuf_set_options", 00:21:19.576 "params": { 00:21:19.576 "small_pool_count": 8192, 00:21:19.576 "large_pool_count": 1024, 00:21:19.576 "small_bufsize": 8192, 00:21:19.576 "large_bufsize": 135168 00:21:19.576 } 00:21:19.576 } 00:21:19.576 ] 00:21:19.576 }, 00:21:19.576 { 00:21:19.576 "subsystem": "sock", 00:21:19.576 "config": [ 00:21:19.576 { 00:21:19.576 "method": "sock_set_default_impl", 00:21:19.576 "params": { 00:21:19.576 "impl_name": "posix" 00:21:19.576 } 00:21:19.576 }, 00:21:19.577 { 00:21:19.577 "method": "sock_impl_set_options", 00:21:19.577 "params": { 00:21:19.577 "impl_name": "ssl", 00:21:19.577 "recv_buf_size": 4096, 00:21:19.577 "send_buf_size": 4096, 00:21:19.577 "enable_recv_pipe": true, 00:21:19.577 "enable_quickack": false, 00:21:19.577 "enable_placement_id": 0, 00:21:19.577 "enable_zerocopy_send_server": true, 00:21:19.577 "enable_zerocopy_send_client": false, 00:21:19.577 "zerocopy_threshold": 0, 00:21:19.577 "tls_version": 0, 00:21:19.577 "enable_ktls": false 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "sock_impl_set_options", 00:21:19.577 "params": { 00:21:19.577 "impl_name": "posix", 00:21:19.577 "recv_buf_size": 2097152, 00:21:19.577 "send_buf_size": 
2097152, 00:21:19.577 "enable_recv_pipe": true, 00:21:19.577 "enable_quickack": false, 00:21:19.577 "enable_placement_id": 0, 00:21:19.577 "enable_zerocopy_send_server": true, 00:21:19.577 "enable_zerocopy_send_client": false, 00:21:19.577 "zerocopy_threshold": 0, 00:21:19.577 "tls_version": 0, 00:21:19.577 "enable_ktls": false 00:21:19.577 } 00:21:19.577 } 00:21:19.577 ] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "vmd", 00:21:19.577 "config": [] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "accel", 00:21:19.577 "config": [ 00:21:19.577 { 00:21:19.577 "method": "accel_set_options", 00:21:19.577 "params": { 00:21:19.577 "small_cache_size": 128, 00:21:19.577 "large_cache_size": 16, 00:21:19.577 "task_count": 2048, 00:21:19.577 "sequence_count": 2048, 00:21:19.577 "buf_count": 2048 00:21:19.577 } 00:21:19.577 } 00:21:19.577 ] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "bdev", 00:21:19.577 "config": [ 00:21:19.577 { 00:21:19.577 "method": "bdev_set_options", 00:21:19.577 "params": { 00:21:19.577 "bdev_io_pool_size": 65535, 00:21:19.577 "bdev_io_cache_size": 256, 00:21:19.577 "bdev_auto_examine": true, 00:21:19.577 "iobuf_small_cache_size": 128, 00:21:19.577 "iobuf_large_cache_size": 16 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_raid_set_options", 00:21:19.577 "params": { 00:21:19.577 "process_window_size_kb": 1024 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_iscsi_set_options", 00:21:19.577 "params": { 00:21:19.577 "timeout_sec": 30 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_nvme_set_options", 00:21:19.577 "params": { 00:21:19.577 "action_on_timeout": "none", 00:21:19.577 "timeout_us": 0, 00:21:19.577 "timeout_admin_us": 0, 00:21:19.577 "keep_alive_timeout_ms": 10000, 00:21:19.577 "arbitration_burst": 0, 00:21:19.577 "low_priority_weight": 0, 00:21:19.577 "medium_priority_weight": 0, 00:21:19.577 "high_priority_weight": 0, 00:21:19.577 "nvme_adminq_poll_period_us": 10000, 00:21:19.577 "nvme_ioq_poll_period_us": 0, 00:21:19.577 "io_queue_requests": 0, 00:21:19.577 "delay_cmd_submit": true, 00:21:19.577 "transport_retry_count": 4, 00:21:19.577 "bdev_retry_count": 3, 00:21:19.577 "transport_ack_timeout": 0, 00:21:19.577 "ctrlr_loss_timeout_sec": 0, 00:21:19.577 "reconnect_delay_sec": 0, 00:21:19.577 "fast_io_fail_timeout_sec": 0, 00:21:19.577 "disable_auto_failback": false, 00:21:19.577 "generate_uuids": false, 00:21:19.577 "transport_tos": 0, 00:21:19.577 "nvme_error_stat": false, 00:21:19.577 "rdma_srq_size": 0, 00:21:19.577 "io_path_stat": false, 00:21:19.577 "allow_accel_sequence": false, 00:21:19.577 "rdma_max_cq_size": 0, 00:21:19.577 "rdma_cm_event_timeout_ms": 0, 00:21:19.577 "dhchap_digests": [ 00:21:19.577 "sha256", 00:21:19.577 "sha384", 00:21:19.577 "sha512" 00:21:19.577 ], 00:21:19.577 "dhchap_dhgroups": [ 00:21:19.577 "null", 00:21:19.577 "ffdhe2048", 00:21:19.577 "ffdhe3072", 00:21:19.577 "ffdhe4096", 00:21:19.577 "ffdhe6144", 00:21:19.577 "ffdhe8192" 00:21:19.577 ] 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_nvme_set_hotplug", 00:21:19.577 "params": { 00:21:19.577 "period_us": 100000, 00:21:19.577 "enable": false 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_malloc_create", 00:21:19.577 "params": { 00:21:19.577 "name": "malloc0", 00:21:19.577 "num_blocks": 8192, 00:21:19.577 "block_size": 4096, 00:21:19.577 "physical_block_size": 4096, 00:21:19.577 "uuid": 
"cc6e6f9f-f6bb-4d21-a9b6-95919c86fcdf", 00:21:19.577 "optimal_io_boundary": 0 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "bdev_wait_for_examine" 00:21:19.577 } 00:21:19.577 ] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "nbd", 00:21:19.577 "config": [] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "scheduler", 00:21:19.577 "config": [ 00:21:19.577 { 00:21:19.577 "method": "framework_set_scheduler", 00:21:19.577 "params": { 00:21:19.577 "name": "static" 00:21:19.577 } 00:21:19.577 } 00:21:19.577 ] 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "subsystem": "nvmf", 00:21:19.577 "config": [ 00:21:19.577 { 00:21:19.577 "method": "nvmf_set_config", 00:21:19.577 "params": { 00:21:19.577 "discovery_filter": "match_any", 00:21:19.577 "admin_cmd_passthru": { 00:21:19.577 "identify_ctrlr": false 00:21:19.577 } 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_set_max_subsystems", 00:21:19.577 "params": { 00:21:19.577 "max_subsystems": 1024 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_set_crdt", 00:21:19.577 "params": { 00:21:19.577 "crdt1": 0, 00:21:19.577 "crdt2": 0, 00:21:19.577 "crdt3": 0 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_create_transport", 00:21:19.577 "params": { 00:21:19.577 "trtype": "TCP", 00:21:19.577 "max_queue_depth": 128, 00:21:19.577 "max_io_qpairs_per_ctrlr": 127, 00:21:19.577 "in_capsule_data_size": 4096, 00:21:19.577 "max_io_size": 131072, 00:21:19.577 "io_unit_size": 131072, 00:21:19.577 "max_aq_depth": 128, 00:21:19.577 "num_shared_buffers": 511, 00:21:19.577 "buf_cache_size": 4294967295, 00:21:19.577 "dif_insert_or_strip": false, 00:21:19.577 "zcopy": false, 00:21:19.577 "c2h_success": false, 00:21:19.577 "sock_priority": 0, 00:21:19.577 "abort_timeout_sec": 1, 00:21:19.577 "ack_timeout": 0, 00:21:19.577 "data_wr_pool_size": 0 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_create_subsystem", 00:21:19.577 "params": { 00:21:19.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.577 "allow_any_host": false, 00:21:19.577 "serial_number": "SPDK00000000000001", 00:21:19.577 "model_number": "SPDK bdev Controller", 00:21:19.577 "max_namespaces": 10, 00:21:19.577 "min_cntlid": 1, 00:21:19.577 "max_cntlid": 65519, 00:21:19.577 "ana_reporting": false 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_subsystem_add_host", 00:21:19.577 "params": { 00:21:19.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.577 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.577 "psk": "/tmp/tmp.eqjKDGyyXj" 00:21:19.577 } 00:21:19.577 }, 00:21:19.577 { 00:21:19.577 "method": "nvmf_subsystem_add_ns", 00:21:19.577 "params": { 00:21:19.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.577 "namespace": { 00:21:19.577 "nsid": 1, 00:21:19.577 "bdev_name": "malloc0", 00:21:19.577 "nguid": "CC6E6F9FF6BB4D21A9B695919C86FCDF", 00:21:19.577 "uuid": "cc6e6f9f-f6bb-4d21-a9b6-95919c86fcdf", 00:21:19.577 "no_auto_visible": false 00:21:19.578 } 00:21:19.578 } 00:21:19.578 }, 00:21:19.578 { 00:21:19.578 "method": "nvmf_subsystem_add_listener", 00:21:19.578 "params": { 00:21:19.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.578 "listen_address": { 00:21:19.578 "trtype": "TCP", 00:21:19.578 "adrfam": "IPv4", 00:21:19.578 "traddr": "10.0.0.2", 00:21:19.578 "trsvcid": "4420" 00:21:19.578 }, 00:21:19.578 "secure_channel": true 00:21:19.578 } 00:21:19.578 } 00:21:19.578 ] 00:21:19.578 } 00:21:19.578 ] 00:21:19.578 }' 00:21:19.578 14:29:56 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:19.838 14:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:19.838 "subsystems": [ 00:21:19.838 { 00:21:19.838 "subsystem": "keyring", 00:21:19.838 "config": [] 00:21:19.838 }, 00:21:19.838 { 00:21:19.838 "subsystem": "iobuf", 00:21:19.838 "config": [ 00:21:19.838 { 00:21:19.838 "method": "iobuf_set_options", 00:21:19.838 "params": { 00:21:19.838 "small_pool_count": 8192, 00:21:19.838 "large_pool_count": 1024, 00:21:19.838 "small_bufsize": 8192, 00:21:19.838 "large_bufsize": 135168 00:21:19.838 } 00:21:19.838 } 00:21:19.838 ] 00:21:19.838 }, 00:21:19.838 { 00:21:19.838 "subsystem": "sock", 00:21:19.838 "config": [ 00:21:19.838 { 00:21:19.838 "method": "sock_set_default_impl", 00:21:19.838 "params": { 00:21:19.839 "impl_name": "posix" 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "sock_impl_set_options", 00:21:19.839 "params": { 00:21:19.839 "impl_name": "ssl", 00:21:19.839 "recv_buf_size": 4096, 00:21:19.839 "send_buf_size": 4096, 00:21:19.839 "enable_recv_pipe": true, 00:21:19.839 "enable_quickack": false, 00:21:19.839 "enable_placement_id": 0, 00:21:19.839 "enable_zerocopy_send_server": true, 00:21:19.839 "enable_zerocopy_send_client": false, 00:21:19.839 "zerocopy_threshold": 0, 00:21:19.839 "tls_version": 0, 00:21:19.839 "enable_ktls": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "sock_impl_set_options", 00:21:19.839 "params": { 00:21:19.839 "impl_name": "posix", 00:21:19.839 "recv_buf_size": 2097152, 00:21:19.839 "send_buf_size": 2097152, 00:21:19.839 "enable_recv_pipe": true, 00:21:19.839 "enable_quickack": false, 00:21:19.839 "enable_placement_id": 0, 00:21:19.839 "enable_zerocopy_send_server": true, 00:21:19.839 "enable_zerocopy_send_client": false, 00:21:19.839 "zerocopy_threshold": 0, 00:21:19.839 "tls_version": 0, 00:21:19.839 "enable_ktls": false 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "vmd", 00:21:19.839 "config": [] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "accel", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "accel_set_options", 00:21:19.839 "params": { 00:21:19.839 "small_cache_size": 128, 00:21:19.839 "large_cache_size": 16, 00:21:19.839 "task_count": 2048, 00:21:19.839 "sequence_count": 2048, 00:21:19.839 "buf_count": 2048 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "bdev", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "bdev_set_options", 00:21:19.839 "params": { 00:21:19.839 "bdev_io_pool_size": 65535, 00:21:19.839 "bdev_io_cache_size": 256, 00:21:19.839 "bdev_auto_examine": true, 00:21:19.839 "iobuf_small_cache_size": 128, 00:21:19.839 "iobuf_large_cache_size": 16 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_raid_set_options", 00:21:19.839 "params": { 00:21:19.839 "process_window_size_kb": 1024 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_iscsi_set_options", 00:21:19.839 "params": { 00:21:19.839 "timeout_sec": 30 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_nvme_set_options", 00:21:19.839 "params": { 00:21:19.839 "action_on_timeout": "none", 00:21:19.839 "timeout_us": 0, 00:21:19.839 "timeout_admin_us": 0, 00:21:19.839 "keep_alive_timeout_ms": 10000, 00:21:19.839 "arbitration_burst": 0, 
00:21:19.839 "low_priority_weight": 0, 00:21:19.839 "medium_priority_weight": 0, 00:21:19.839 "high_priority_weight": 0, 00:21:19.839 "nvme_adminq_poll_period_us": 10000, 00:21:19.839 "nvme_ioq_poll_period_us": 0, 00:21:19.839 "io_queue_requests": 512, 00:21:19.839 "delay_cmd_submit": true, 00:21:19.839 "transport_retry_count": 4, 00:21:19.839 "bdev_retry_count": 3, 00:21:19.839 "transport_ack_timeout": 0, 00:21:19.839 "ctrlr_loss_timeout_sec": 0, 00:21:19.839 "reconnect_delay_sec": 0, 00:21:19.839 "fast_io_fail_timeout_sec": 0, 00:21:19.839 "disable_auto_failback": false, 00:21:19.839 "generate_uuids": false, 00:21:19.839 "transport_tos": 0, 00:21:19.839 "nvme_error_stat": false, 00:21:19.839 "rdma_srq_size": 0, 00:21:19.839 "io_path_stat": false, 00:21:19.839 "allow_accel_sequence": false, 00:21:19.839 "rdma_max_cq_size": 0, 00:21:19.839 "rdma_cm_event_timeout_ms": 0, 00:21:19.839 "dhchap_digests": [ 00:21:19.839 "sha256", 00:21:19.839 "sha384", 00:21:19.839 "sha512" 00:21:19.839 ], 00:21:19.839 "dhchap_dhgroups": [ 00:21:19.839 "null", 00:21:19.839 "ffdhe2048", 00:21:19.839 "ffdhe3072", 00:21:19.839 "ffdhe4096", 00:21:19.839 "ffdhe6144", 00:21:19.839 "ffdhe8192" 00:21:19.839 ] 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_nvme_attach_controller", 00:21:19.839 "params": { 00:21:19.839 "name": "TLSTEST", 00:21:19.839 "trtype": "TCP", 00:21:19.839 "adrfam": "IPv4", 00:21:19.839 "traddr": "10.0.0.2", 00:21:19.839 "trsvcid": "4420", 00:21:19.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.839 "prchk_reftag": false, 00:21:19.839 "prchk_guard": false, 00:21:19.839 "ctrlr_loss_timeout_sec": 0, 00:21:19.839 "reconnect_delay_sec": 0, 00:21:19.839 "fast_io_fail_timeout_sec": 0, 00:21:19.839 "psk": "/tmp/tmp.eqjKDGyyXj", 00:21:19.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.839 "hdgst": false, 00:21:19.839 "ddgst": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_nvme_set_hotplug", 00:21:19.839 "params": { 00:21:19.839 "period_us": 100000, 00:21:19.839 "enable": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_wait_for_examine" 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "nbd", 00:21:19.839 "config": [] 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }' 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3074056 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3074056 ']' 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3074056 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3074056 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3074056' 00:21:19.839 killing process with pid 3074056 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3074056 00:21:19.839 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.839 00:21:19.839 Latency(us) 00:21:19.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:19.839 =================================================================================================================== 00:21:19.839 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.839 [2024-06-10 14:29:57.327623] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:19.839 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3074056 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3073708 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3073708 ']' 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3073708 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3073708 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3073708' 00:21:20.100 killing process with pid 3073708 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3073708 00:21:20.100 [2024-06-10 14:29:57.495696] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3073708 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 14:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:20.100 "subsystems": [ 00:21:20.100 { 00:21:20.100 "subsystem": "keyring", 00:21:20.100 "config": [] 00:21:20.100 }, 00:21:20.100 { 00:21:20.100 "subsystem": "iobuf", 00:21:20.100 "config": [ 00:21:20.100 { 00:21:20.100 "method": "iobuf_set_options", 00:21:20.100 "params": { 00:21:20.100 "small_pool_count": 8192, 00:21:20.100 "large_pool_count": 1024, 00:21:20.100 "small_bufsize": 8192, 00:21:20.100 "large_bufsize": 135168 00:21:20.100 } 00:21:20.100 } 00:21:20.100 ] 00:21:20.100 }, 00:21:20.100 { 00:21:20.100 "subsystem": "sock", 00:21:20.100 "config": [ 00:21:20.100 { 00:21:20.100 "method": "sock_set_default_impl", 00:21:20.100 "params": { 00:21:20.101 "impl_name": "posix" 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "sock_impl_set_options", 00:21:20.101 "params": { 00:21:20.101 "impl_name": "ssl", 00:21:20.101 "recv_buf_size": 4096, 00:21:20.101 "send_buf_size": 4096, 00:21:20.101 "enable_recv_pipe": true, 00:21:20.101 "enable_quickack": false, 00:21:20.101 "enable_placement_id": 0, 00:21:20.101 "enable_zerocopy_send_server": true, 00:21:20.101 "enable_zerocopy_send_client": false, 00:21:20.101 "zerocopy_threshold": 0, 00:21:20.101 "tls_version": 0, 00:21:20.101 "enable_ktls": false 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "sock_impl_set_options", 
00:21:20.101 "params": { 00:21:20.101 "impl_name": "posix", 00:21:20.101 "recv_buf_size": 2097152, 00:21:20.101 "send_buf_size": 2097152, 00:21:20.101 "enable_recv_pipe": true, 00:21:20.101 "enable_quickack": false, 00:21:20.101 "enable_placement_id": 0, 00:21:20.101 "enable_zerocopy_send_server": true, 00:21:20.101 "enable_zerocopy_send_client": false, 00:21:20.101 "zerocopy_threshold": 0, 00:21:20.101 "tls_version": 0, 00:21:20.101 "enable_ktls": false 00:21:20.101 } 00:21:20.101 } 00:21:20.101 ] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "vmd", 00:21:20.101 "config": [] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "accel", 00:21:20.101 "config": [ 00:21:20.101 { 00:21:20.101 "method": "accel_set_options", 00:21:20.101 "params": { 00:21:20.101 "small_cache_size": 128, 00:21:20.101 "large_cache_size": 16, 00:21:20.101 "task_count": 2048, 00:21:20.101 "sequence_count": 2048, 00:21:20.101 "buf_count": 2048 00:21:20.101 } 00:21:20.101 } 00:21:20.101 ] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "bdev", 00:21:20.101 "config": [ 00:21:20.101 { 00:21:20.101 "method": "bdev_set_options", 00:21:20.101 "params": { 00:21:20.101 "bdev_io_pool_size": 65535, 00:21:20.101 "bdev_io_cache_size": 256, 00:21:20.101 "bdev_auto_examine": true, 00:21:20.101 "iobuf_small_cache_size": 128, 00:21:20.101 "iobuf_large_cache_size": 16 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_raid_set_options", 00:21:20.101 "params": { 00:21:20.101 "process_window_size_kb": 1024 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_iscsi_set_options", 00:21:20.101 "params": { 00:21:20.101 "timeout_sec": 30 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_nvme_set_options", 00:21:20.101 "params": { 00:21:20.101 "action_on_timeout": "none", 00:21:20.101 "timeout_us": 0, 00:21:20.101 "timeout_admin_us": 0, 00:21:20.101 "keep_alive_timeout_ms": 10000, 00:21:20.101 "arbitration_burst": 0, 00:21:20.101 "low_priority_weight": 0, 00:21:20.101 "medium_priority_weight": 0, 00:21:20.101 "high_priority_weight": 0, 00:21:20.101 "nvme_adminq_poll_period_us": 10000, 00:21:20.101 "nvme_ioq_poll_period_us": 0, 00:21:20.101 "io_queue_requests": 0, 00:21:20.101 "delay_cmd_submit": true, 00:21:20.101 "transport_retry_count": 4, 00:21:20.101 "bdev_retry_count": 3, 00:21:20.101 "transport_ack_timeout": 0, 00:21:20.101 "ctrlr_loss_timeout_sec": 0, 00:21:20.101 "reconnect_delay_sec": 0, 00:21:20.101 "fast_io_fail_timeout_sec": 0, 00:21:20.101 "disable_auto_failback": false, 00:21:20.101 "generate_uuids": false, 00:21:20.101 "transport_tos": 0, 00:21:20.101 "nvme_error_stat": false, 00:21:20.101 "rdma_srq_size": 0, 00:21:20.101 "io_path_stat": false, 00:21:20.101 "allow_accel_sequence": false, 00:21:20.101 "rdma_max_cq_size": 0, 00:21:20.101 "rdma_cm_event_timeout_ms": 0, 00:21:20.101 "dhchap_digests": [ 00:21:20.101 "sha256", 00:21:20.101 "sha384", 00:21:20.101 "sha512" 00:21:20.101 ], 00:21:20.101 "dhchap_dhgroups": [ 00:21:20.101 "null", 00:21:20.101 "ffdhe2048", 00:21:20.101 "ffdhe3072", 00:21:20.101 "ffdhe4096", 00:21:20.101 "ffdhe6144", 00:21:20.101 "ffdhe8192" 00:21:20.101 ] 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_nvme_set_hotplug", 00:21:20.101 "params": { 00:21:20.101 "period_us": 100000, 00:21:20.101 "enable": false 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_malloc_create", 00:21:20.101 "params": { 00:21:20.101 "name": "malloc0", 00:21:20.101 "num_blocks": 8192, 
00:21:20.101 "block_size": 4096, 00:21:20.101 "physical_block_size": 4096, 00:21:20.101 "uuid": "cc6e6f9f-f6bb-4d21-a9b6-95919c86fcdf", 00:21:20.101 "optimal_io_boundary": 0 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "bdev_wait_for_examine" 00:21:20.101 } 00:21:20.101 ] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "nbd", 00:21:20.101 "config": [] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "scheduler", 00:21:20.101 "config": [ 00:21:20.101 { 00:21:20.101 "method": "framework_set_scheduler", 00:21:20.101 "params": { 00:21:20.101 "name": "static" 00:21:20.101 } 00:21:20.101 } 00:21:20.101 ] 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "subsystem": "nvmf", 00:21:20.101 "config": [ 00:21:20.101 { 00:21:20.101 "method": "nvmf_set_config", 00:21:20.101 "params": { 00:21:20.101 "discovery_filter": "match_any", 00:21:20.101 "admin_cmd_passthru": { 00:21:20.101 "identify_ctrlr": false 00:21:20.101 } 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_set_max_subsystems", 00:21:20.101 "params": { 00:21:20.101 "max_subsystems": 1024 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_set_crdt", 00:21:20.101 "params": { 00:21:20.101 "crdt1": 0, 00:21:20.101 "crdt2": 0, 00:21:20.101 "crdt3": 0 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_create_transport", 00:21:20.101 "params": { 00:21:20.101 "trtype": "TCP", 00:21:20.101 "max_queue_depth": 128, 00:21:20.101 "max_io_qpairs_per_ctrlr": 127, 00:21:20.101 "in_capsule_data_size": 4096, 00:21:20.101 "max_io_size": 131072, 00:21:20.101 "io_unit_size": 131072, 00:21:20.101 "max_aq_depth": 128, 00:21:20.101 "num_shared_buffers": 511, 00:21:20.101 "buf_cache_size": 4294967295, 00:21:20.101 "dif_insert_or_strip": false, 00:21:20.101 "zcopy": false, 00:21:20.101 "c2h_success": false, 00:21:20.101 "sock_priority": 0, 00:21:20.101 "abort_timeout_sec": 1, 00:21:20.101 "ack_timeout": 0, 00:21:20.101 "data_wr_pool_size": 0 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_create_subsystem", 00:21:20.101 "params": { 00:21:20.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.101 "allow_any_host": false, 00:21:20.101 "serial_number": "SPDK00000000000001", 00:21:20.101 "model_number": "SPDK bdev Controller", 00:21:20.101 "max_namespaces": 10, 00:21:20.101 "min_cntlid": 1, 00:21:20.101 "max_cntlid": 65519, 00:21:20.101 "ana_reporting": false 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_subsystem_add_host", 00:21:20.101 "params": { 00:21:20.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.101 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.101 "psk": "/tmp/tmp.eqjKDGyyXj" 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_subsystem_add_ns", 00:21:20.101 "params": { 00:21:20.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.101 "namespace": { 00:21:20.101 "nsid": 1, 00:21:20.101 "bdev_name": "malloc0", 00:21:20.101 "nguid": "CC6E6F9FF6BB4D21A9B695919C86FCDF", 00:21:20.101 "uuid": "cc6e6f9f-f6bb-4d21-a9b6-95919c86fcdf", 00:21:20.101 "no_auto_visible": false 00:21:20.101 } 00:21:20.101 } 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "method": "nvmf_subsystem_add_listener", 00:21:20.102 "params": { 00:21:20.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.102 "listen_address": { 00:21:20.102 "trtype": "TCP", 00:21:20.102 "adrfam": "IPv4", 00:21:20.102 "traddr": "10.0.0.2", 00:21:20.102 "trsvcid": "4420" 00:21:20.102 }, 00:21:20.102 "secure_channel": true 00:21:20.102 } 
00:21:20.102 } 00:21:20.102 ] 00:21:20.102 } 00:21:20.102 ] 00:21:20.102 }' 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3074395 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3074395 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3074395 ']' 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:20.102 14:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.102 [2024-06-10 14:29:57.691056] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:20.102 [2024-06-10 14:29:57.691111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.361 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.361 [2024-06-10 14:29:57.753966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.361 [2024-06-10 14:29:57.817809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.361 [2024-06-10 14:29:57.817843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.361 [2024-06-10 14:29:57.817850] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.361 [2024-06-10 14:29:57.817857] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.361 [2024-06-10 14:29:57.817862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
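A short sketch, for readability of the nvmf_tgt command above: the '-c /dev/fd/62' argument is bash process substitution. target/tls.sh@203 echoes the JSON shown and hands it to nvmf_tgt without writing a file; the tgtconf variable name below is illustrative, the mask and flags are the ones from this run, and the real invocation additionally runs inside the cvl_0_0_ns_spdk network namespace via 'ip netns exec' as shown above.
  # inline JSON config -> file descriptor -> nvmf_tgt
  tgtconf='{ "subsystems": [ ... ] }'                        # the JSON echoed at target/tls.sh@203
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")       # <(...) is what shows up as /dev/fd/62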
00:21:20.361 [2024-06-10 14:29:57.817918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.620 [2024-06-10 14:29:58.006826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.620 [2024-06-10 14:29:58.022765] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:20.620 [2024-06-10 14:29:58.038822] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.620 [2024-06-10 14:29:58.047606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3074444 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3074444 /var/tmp/bdevperf.sock 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3074444 ']' 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
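A note on the bdevperf runs in this log: each one follows the same two-step pattern, namely start bdevperf idle with -z on its own RPC socket, then trigger the workload through bdevperf.py once that socket is up. A condensed sketch with the exact options used for the run below (the full workspace paths are shortened here to the bare binary/script names):
  # step 1: bdevperf waits for an RPC instead of running immediately (-z); the echoed config arrives on /dev/fd/63
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  # step 2: once /var/tmp/bdevperf.sock exists, start the I/O; this produces the Latency table further down
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests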
00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.190 14:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:21.190 "subsystems": [ 00:21:21.190 { 00:21:21.190 "subsystem": "keyring", 00:21:21.190 "config": [] 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "subsystem": "iobuf", 00:21:21.190 "config": [ 00:21:21.190 { 00:21:21.190 "method": "iobuf_set_options", 00:21:21.190 "params": { 00:21:21.190 "small_pool_count": 8192, 00:21:21.190 "large_pool_count": 1024, 00:21:21.190 "small_bufsize": 8192, 00:21:21.190 "large_bufsize": 135168 00:21:21.190 } 00:21:21.190 } 00:21:21.190 ] 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "subsystem": "sock", 00:21:21.190 "config": [ 00:21:21.190 { 00:21:21.190 "method": "sock_set_default_impl", 00:21:21.190 "params": { 00:21:21.190 "impl_name": "posix" 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "sock_impl_set_options", 00:21:21.190 "params": { 00:21:21.190 "impl_name": "ssl", 00:21:21.190 "recv_buf_size": 4096, 00:21:21.190 "send_buf_size": 4096, 00:21:21.190 "enable_recv_pipe": true, 00:21:21.190 "enable_quickack": false, 00:21:21.190 "enable_placement_id": 0, 00:21:21.190 "enable_zerocopy_send_server": true, 00:21:21.190 "enable_zerocopy_send_client": false, 00:21:21.190 "zerocopy_threshold": 0, 00:21:21.190 "tls_version": 0, 00:21:21.190 "enable_ktls": false 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "sock_impl_set_options", 00:21:21.190 "params": { 00:21:21.190 "impl_name": "posix", 00:21:21.190 "recv_buf_size": 2097152, 00:21:21.190 "send_buf_size": 2097152, 00:21:21.190 "enable_recv_pipe": true, 00:21:21.190 "enable_quickack": false, 00:21:21.190 "enable_placement_id": 0, 00:21:21.190 "enable_zerocopy_send_server": true, 00:21:21.190 "enable_zerocopy_send_client": false, 00:21:21.190 "zerocopy_threshold": 0, 00:21:21.190 "tls_version": 0, 00:21:21.190 "enable_ktls": false 00:21:21.190 } 00:21:21.190 } 00:21:21.190 ] 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "subsystem": "vmd", 00:21:21.190 "config": [] 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "subsystem": "accel", 00:21:21.190 "config": [ 00:21:21.190 { 00:21:21.190 "method": "accel_set_options", 00:21:21.190 "params": { 00:21:21.190 "small_cache_size": 128, 00:21:21.190 "large_cache_size": 16, 00:21:21.190 "task_count": 2048, 00:21:21.190 "sequence_count": 2048, 00:21:21.190 "buf_count": 2048 00:21:21.190 } 00:21:21.190 } 00:21:21.190 ] 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "subsystem": "bdev", 00:21:21.190 "config": [ 00:21:21.190 { 00:21:21.190 "method": "bdev_set_options", 00:21:21.190 "params": { 00:21:21.190 "bdev_io_pool_size": 65535, 00:21:21.190 "bdev_io_cache_size": 256, 00:21:21.190 "bdev_auto_examine": true, 00:21:21.190 "iobuf_small_cache_size": 128, 00:21:21.190 "iobuf_large_cache_size": 16 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "bdev_raid_set_options", 00:21:21.190 "params": { 00:21:21.190 "process_window_size_kb": 1024 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "bdev_iscsi_set_options", 00:21:21.190 "params": { 00:21:21.190 "timeout_sec": 30 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": 
"bdev_nvme_set_options", 00:21:21.190 "params": { 00:21:21.190 "action_on_timeout": "none", 00:21:21.190 "timeout_us": 0, 00:21:21.190 "timeout_admin_us": 0, 00:21:21.190 "keep_alive_timeout_ms": 10000, 00:21:21.190 "arbitration_burst": 0, 00:21:21.190 "low_priority_weight": 0, 00:21:21.190 "medium_priority_weight": 0, 00:21:21.190 "high_priority_weight": 0, 00:21:21.190 "nvme_adminq_poll_period_us": 10000, 00:21:21.190 "nvme_ioq_poll_period_us": 0, 00:21:21.190 "io_queue_requests": 512, 00:21:21.190 "delay_cmd_submit": true, 00:21:21.190 "transport_retry_count": 4, 00:21:21.190 "bdev_retry_count": 3, 00:21:21.190 "transport_ack_timeout": 0, 00:21:21.190 "ctrlr_loss_timeout_sec": 0, 00:21:21.190 "reconnect_delay_sec": 0, 00:21:21.190 "fast_io_fail_timeout_sec": 0, 00:21:21.190 "disable_auto_failback": false, 00:21:21.190 "generate_uuids": false, 00:21:21.190 "transport_tos": 0, 00:21:21.190 "nvme_error_stat": false, 00:21:21.190 "rdma_srq_size": 0, 00:21:21.190 "io_path_stat": false, 00:21:21.190 "allow_accel_sequence": false, 00:21:21.190 "rdma_max_cq_size": 0, 00:21:21.190 "rdma_cm_event_timeout_ms": 0, 00:21:21.190 "dhchap_digests": [ 00:21:21.190 "sha256", 00:21:21.190 "sha384", 00:21:21.190 "sha512" 00:21:21.190 ], 00:21:21.190 "dhchap_dhgroups": [ 00:21:21.190 "null", 00:21:21.190 "ffdhe2048", 00:21:21.190 "ffdhe3072", 00:21:21.190 "ffdhe4096", 00:21:21.190 "ffdhe6144", 00:21:21.190 "ffdhe8192" 00:21:21.190 ] 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "bdev_nvme_attach_controller", 00:21:21.190 "params": { 00:21:21.190 "name": "TLSTEST", 00:21:21.190 "trtype": "TCP", 00:21:21.190 "adrfam": "IPv4", 00:21:21.190 "traddr": "10.0.0.2", 00:21:21.190 "trsvcid": "4420", 00:21:21.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.190 "prchk_reftag": false, 00:21:21.190 "prchk_guard": false, 00:21:21.190 "ctrlr_loss_timeout_sec": 0, 00:21:21.190 "reconnect_delay_sec": 0, 00:21:21.190 "fast_io_fail_timeout_sec": 0, 00:21:21.190 "psk": "/tmp/tmp.eqjKDGyyXj", 00:21:21.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.190 "hdgst": false, 00:21:21.190 "ddgst": false 00:21:21.190 } 00:21:21.190 }, 00:21:21.190 { 00:21:21.190 "method": "bdev_nvme_set_hotplug", 00:21:21.190 "params": { 00:21:21.190 "period_us": 100000, 00:21:21.190 "enable": false 00:21:21.190 } 00:21:21.191 }, 00:21:21.191 { 00:21:21.191 "method": "bdev_wait_for_examine" 00:21:21.191 } 00:21:21.191 ] 00:21:21.191 }, 00:21:21.191 { 00:21:21.191 "subsystem": "nbd", 00:21:21.191 "config": [] 00:21:21.191 } 00:21:21.191 ] 00:21:21.191 }' 00:21:21.191 [2024-06-10 14:29:58.642191] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:21.191 [2024-06-10 14:29:58.642240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074444 ] 00:21:21.191 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.191 [2024-06-10 14:29:58.690815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.191 [2024-06-10 14:29:58.745264] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.450 [2024-06-10 14:29:58.869821] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.450 [2024-06-10 14:29:58.869883] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:22.023 14:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:22.023 14:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:22.023 14:29:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:22.023 Running I/O for 10 seconds... 00:21:34.253 00:21:34.253 Latency(us) 00:21:34.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.253 Verification LBA range: start 0x0 length 0x2000 00:21:34.253 TLSTESTn1 : 10.09 4683.72 18.30 0.00 0.00 27228.69 5515.95 89128.96 00:21:34.253 =================================================================================================================== 00:21:34.253 Total : 4683.72 18.30 0.00 0.00 27228.69 5515.95 89128.96 00:21:34.253 0 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3074444 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3074444 ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3074444 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3074444 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3074444' 00:21:34.253 killing process with pid 3074444 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3074444 00:21:34.253 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.253 00:21:34.253 Latency(us) 00:21:34.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.253 =================================================================================================================== 00:21:34.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.253 [2024-06-10 14:30:09.795371] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3074444 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3074395 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3074395 ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3074395 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3074395 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3074395' 00:21:34.253 killing process with pid 3074395 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3074395 00:21:34.253 [2024-06-10 14:30:09.963008] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:34.253 14:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3074395 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3077227 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3077227 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3077227 ']' 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:34.253 14:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.253 [2024-06-10 14:30:10.164278] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:34.253 [2024-06-10 14:30:10.164345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.253 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.253 [2024-06-10 14:30:10.245842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.253 [2024-06-10 14:30:10.329570] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:34.253 [2024-06-10 14:30:10.329625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.253 [2024-06-10 14:30:10.329634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.253 [2024-06-10 14:30:10.329647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.253 [2024-06-10 14:30:10.329654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.253 [2024-06-10 14:30:10.329680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.eqjKDGyyXj 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eqjKDGyyXj 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.253 [2024-06-10 14:30:11.286860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:34.253 [2024-06-10 14:30:11.719950] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.253 [2024-06-10 14:30:11.720255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.253 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:34.515 malloc0 00:21:34.515 14:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:34.775 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eqjKDGyyXj 00:21:35.035 [2024-06-10 14:30:12.371890] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3077718 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3077718 /var/tmp/bdevperf.sock 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3077718 ']' 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.035 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.035 [2024-06-10 14:30:12.439179] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:35.035 [2024-06-10 14:30:12.439245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077718 ] 00:21:35.035 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.035 [2024-06-10 14:30:12.501635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.035 [2024-06-10 14:30:12.575441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.296 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:35.296 14:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:35.296 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eqjKDGyyXj 00:21:35.296 14:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:35.556 [2024-06-10 14:30:13.029582] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.556 nvme0n1 00:21:35.556 14:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.817 Running I/O for 1 seconds... 
00:21:36.758 00:21:36.758 Latency(us) 00:21:36.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.758 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:36.758 Verification LBA range: start 0x0 length 0x2000 00:21:36.758 nvme0n1 : 1.01 4624.03 18.06 0.00 0.00 27439.32 6498.99 29709.65 00:21:36.758 =================================================================================================================== 00:21:36.758 Total : 4624.03 18.06 0.00 0.00 27439.32 6498.99 29709.65 00:21:36.758 0 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3077718 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3077718 ']' 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3077718 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3077718 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3077718' 00:21:36.758 killing process with pid 3077718 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3077718 00:21:36.758 Received shutdown signal, test time was about 1.000000 seconds 00:21:36.758 00:21:36.758 Latency(us) 00:21:36.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.758 =================================================================================================================== 00:21:36.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.758 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3077718 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3077227 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3077227 ']' 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3077227 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3077227 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3077227' 00:21:37.019 killing process with pid 3077227 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3077227 00:21:37.019 [2024-06-10 14:30:14.515025] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:37.019 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3077227 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.280 
14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3078205 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3078205 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3078205 ']' 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:37.280 14:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.280 [2024-06-10 14:30:14.728907] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:37.280 [2024-06-10 14:30:14.728961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.280 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.280 [2024-06-10 14:30:14.811700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.542 [2024-06-10 14:30:14.875618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.542 [2024-06-10 14:30:14.875652] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.542 [2024-06-10 14:30:14.875659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.542 [2024-06-10 14:30:14.875666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.542 [2024-06-10 14:30:14.875671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:37.542 [2024-06-10 14:30:14.875691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.115 [2024-06-10 14:30:15.642730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.115 malloc0 00:21:38.115 [2024-06-10 14:30:15.672884] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.115 [2024-06-10 14:30:15.673191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3078416 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3078416 /var/tmp/bdevperf.sock 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3078416 ']' 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:38.115 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.375 [2024-06-10 14:30:15.758870] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
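The final block below repeats the attach with the keyring-registered key0 and then snapshots both sides of the setup with save_config; the tgtcfg and bperfcfg JSON blobs further down are exactly those snapshots. Done by hand, the dump step would look roughly like this (output filenames are illustrative, and the target socket is assumed to be its default /var/tmp/spdk.sock as used by waitforlisten in this run):
  rpc.py -s /var/tmp/spdk.sock     save_config > tgt_config.json    # what target/tls.sh@263 captures into $tgtcfg
  rpc.py -s /var/tmp/bdevperf.sock save_config > bperf_config.json  # what target/tls.sh@264 captures into $bperfcfg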
00:21:38.375 [2024-06-10 14:30:15.758943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078416 ] 00:21:38.375 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.375 [2024-06-10 14:30:15.823001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.375 [2024-06-10 14:30:15.896386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.636 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:38.636 14:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:38.636 14:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eqjKDGyyXj 00:21:38.636 14:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:38.897 [2024-06-10 14:30:16.366178] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.897 nvme0n1 00:21:38.897 14:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:39.158 Running I/O for 1 seconds... 00:21:40.099 00:21:40.099 Latency(us) 00:21:40.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:40.099 Verification LBA range: start 0x0 length 0x2000 00:21:40.100 nvme0n1 : 1.01 4183.50 16.34 0.00 0.00 30370.58 5352.11 55924.05 00:21:40.100 =================================================================================================================== 00:21:40.100 Total : 4183.50 16.34 0.00 0.00 30370.58 5352.11 55924.05 00:21:40.100 0 00:21:40.100 14:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:40.100 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.100 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.361 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.361 14:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:40.361 "subsystems": [ 00:21:40.361 { 00:21:40.361 "subsystem": "keyring", 00:21:40.361 "config": [ 00:21:40.361 { 00:21:40.361 "method": "keyring_file_add_key", 00:21:40.361 "params": { 00:21:40.361 "name": "key0", 00:21:40.361 "path": "/tmp/tmp.eqjKDGyyXj" 00:21:40.361 } 00:21:40.361 } 00:21:40.361 ] 00:21:40.361 }, 00:21:40.361 { 00:21:40.361 "subsystem": "iobuf", 00:21:40.361 "config": [ 00:21:40.361 { 00:21:40.361 "method": "iobuf_set_options", 00:21:40.361 "params": { 00:21:40.361 "small_pool_count": 8192, 00:21:40.361 "large_pool_count": 1024, 00:21:40.361 "small_bufsize": 8192, 00:21:40.361 "large_bufsize": 135168 00:21:40.361 } 00:21:40.361 } 00:21:40.361 ] 00:21:40.361 }, 00:21:40.361 { 00:21:40.361 "subsystem": "sock", 00:21:40.361 "config": [ 00:21:40.362 { 00:21:40.362 "method": "sock_set_default_impl", 00:21:40.362 "params": { 00:21:40.362 "impl_name": "posix" 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 
{ 00:21:40.362 "method": "sock_impl_set_options", 00:21:40.362 "params": { 00:21:40.362 "impl_name": "ssl", 00:21:40.362 "recv_buf_size": 4096, 00:21:40.362 "send_buf_size": 4096, 00:21:40.362 "enable_recv_pipe": true, 00:21:40.362 "enable_quickack": false, 00:21:40.362 "enable_placement_id": 0, 00:21:40.362 "enable_zerocopy_send_server": true, 00:21:40.362 "enable_zerocopy_send_client": false, 00:21:40.362 "zerocopy_threshold": 0, 00:21:40.362 "tls_version": 0, 00:21:40.362 "enable_ktls": false 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "sock_impl_set_options", 00:21:40.362 "params": { 00:21:40.362 "impl_name": "posix", 00:21:40.362 "recv_buf_size": 2097152, 00:21:40.362 "send_buf_size": 2097152, 00:21:40.362 "enable_recv_pipe": true, 00:21:40.362 "enable_quickack": false, 00:21:40.362 "enable_placement_id": 0, 00:21:40.362 "enable_zerocopy_send_server": true, 00:21:40.362 "enable_zerocopy_send_client": false, 00:21:40.362 "zerocopy_threshold": 0, 00:21:40.362 "tls_version": 0, 00:21:40.362 "enable_ktls": false 00:21:40.362 } 00:21:40.362 } 00:21:40.362 ] 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "subsystem": "vmd", 00:21:40.362 "config": [] 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "subsystem": "accel", 00:21:40.362 "config": [ 00:21:40.362 { 00:21:40.362 "method": "accel_set_options", 00:21:40.362 "params": { 00:21:40.362 "small_cache_size": 128, 00:21:40.362 "large_cache_size": 16, 00:21:40.362 "task_count": 2048, 00:21:40.362 "sequence_count": 2048, 00:21:40.362 "buf_count": 2048 00:21:40.362 } 00:21:40.362 } 00:21:40.362 ] 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "subsystem": "bdev", 00:21:40.362 "config": [ 00:21:40.362 { 00:21:40.362 "method": "bdev_set_options", 00:21:40.362 "params": { 00:21:40.362 "bdev_io_pool_size": 65535, 00:21:40.362 "bdev_io_cache_size": 256, 00:21:40.362 "bdev_auto_examine": true, 00:21:40.362 "iobuf_small_cache_size": 128, 00:21:40.362 "iobuf_large_cache_size": 16 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_raid_set_options", 00:21:40.362 "params": { 00:21:40.362 "process_window_size_kb": 1024 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_iscsi_set_options", 00:21:40.362 "params": { 00:21:40.362 "timeout_sec": 30 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_nvme_set_options", 00:21:40.362 "params": { 00:21:40.362 "action_on_timeout": "none", 00:21:40.362 "timeout_us": 0, 00:21:40.362 "timeout_admin_us": 0, 00:21:40.362 "keep_alive_timeout_ms": 10000, 00:21:40.362 "arbitration_burst": 0, 00:21:40.362 "low_priority_weight": 0, 00:21:40.362 "medium_priority_weight": 0, 00:21:40.362 "high_priority_weight": 0, 00:21:40.362 "nvme_adminq_poll_period_us": 10000, 00:21:40.362 "nvme_ioq_poll_period_us": 0, 00:21:40.362 "io_queue_requests": 0, 00:21:40.362 "delay_cmd_submit": true, 00:21:40.362 "transport_retry_count": 4, 00:21:40.362 "bdev_retry_count": 3, 00:21:40.362 "transport_ack_timeout": 0, 00:21:40.362 "ctrlr_loss_timeout_sec": 0, 00:21:40.362 "reconnect_delay_sec": 0, 00:21:40.362 "fast_io_fail_timeout_sec": 0, 00:21:40.362 "disable_auto_failback": false, 00:21:40.362 "generate_uuids": false, 00:21:40.362 "transport_tos": 0, 00:21:40.362 "nvme_error_stat": false, 00:21:40.362 "rdma_srq_size": 0, 00:21:40.362 "io_path_stat": false, 00:21:40.362 "allow_accel_sequence": false, 00:21:40.362 "rdma_max_cq_size": 0, 00:21:40.362 "rdma_cm_event_timeout_ms": 0, 00:21:40.362 "dhchap_digests": [ 00:21:40.362 "sha256", 00:21:40.362 "sha384", 
00:21:40.362 "sha512" 00:21:40.362 ], 00:21:40.362 "dhchap_dhgroups": [ 00:21:40.362 "null", 00:21:40.362 "ffdhe2048", 00:21:40.362 "ffdhe3072", 00:21:40.362 "ffdhe4096", 00:21:40.362 "ffdhe6144", 00:21:40.362 "ffdhe8192" 00:21:40.362 ] 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_nvme_set_hotplug", 00:21:40.362 "params": { 00:21:40.362 "period_us": 100000, 00:21:40.362 "enable": false 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_malloc_create", 00:21:40.362 "params": { 00:21:40.362 "name": "malloc0", 00:21:40.362 "num_blocks": 8192, 00:21:40.362 "block_size": 4096, 00:21:40.362 "physical_block_size": 4096, 00:21:40.362 "uuid": "61ea7867-8b24-4d6e-aadf-3a9c6cb0a4e2", 00:21:40.362 "optimal_io_boundary": 0 00:21:40.362 } 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "method": "bdev_wait_for_examine" 00:21:40.362 } 00:21:40.362 ] 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "subsystem": "nbd", 00:21:40.362 "config": [] 00:21:40.362 }, 00:21:40.362 { 00:21:40.362 "subsystem": "scheduler", 00:21:40.362 "config": [ 00:21:40.363 { 00:21:40.363 "method": "framework_set_scheduler", 00:21:40.363 "params": { 00:21:40.363 "name": "static" 00:21:40.363 } 00:21:40.363 } 00:21:40.363 ] 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "subsystem": "nvmf", 00:21:40.363 "config": [ 00:21:40.363 { 00:21:40.363 "method": "nvmf_set_config", 00:21:40.363 "params": { 00:21:40.363 "discovery_filter": "match_any", 00:21:40.363 "admin_cmd_passthru": { 00:21:40.363 "identify_ctrlr": false 00:21:40.363 } 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_set_max_subsystems", 00:21:40.363 "params": { 00:21:40.363 "max_subsystems": 1024 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_set_crdt", 00:21:40.363 "params": { 00:21:40.363 "crdt1": 0, 00:21:40.363 "crdt2": 0, 00:21:40.363 "crdt3": 0 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_create_transport", 00:21:40.363 "params": { 00:21:40.363 "trtype": "TCP", 00:21:40.363 "max_queue_depth": 128, 00:21:40.363 "max_io_qpairs_per_ctrlr": 127, 00:21:40.363 "in_capsule_data_size": 4096, 00:21:40.363 "max_io_size": 131072, 00:21:40.363 "io_unit_size": 131072, 00:21:40.363 "max_aq_depth": 128, 00:21:40.363 "num_shared_buffers": 511, 00:21:40.363 "buf_cache_size": 4294967295, 00:21:40.363 "dif_insert_or_strip": false, 00:21:40.363 "zcopy": false, 00:21:40.363 "c2h_success": false, 00:21:40.363 "sock_priority": 0, 00:21:40.363 "abort_timeout_sec": 1, 00:21:40.363 "ack_timeout": 0, 00:21:40.363 "data_wr_pool_size": 0 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_create_subsystem", 00:21:40.363 "params": { 00:21:40.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.363 "allow_any_host": false, 00:21:40.363 "serial_number": "00000000000000000000", 00:21:40.363 "model_number": "SPDK bdev Controller", 00:21:40.363 "max_namespaces": 32, 00:21:40.363 "min_cntlid": 1, 00:21:40.363 "max_cntlid": 65519, 00:21:40.363 "ana_reporting": false 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_subsystem_add_host", 00:21:40.363 "params": { 00:21:40.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.363 "host": "nqn.2016-06.io.spdk:host1", 00:21:40.363 "psk": "key0" 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_subsystem_add_ns", 00:21:40.363 "params": { 00:21:40.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.363 "namespace": { 00:21:40.363 "nsid": 1, 00:21:40.363 "bdev_name": 
"malloc0", 00:21:40.363 "nguid": "61EA78678B244D6EAADF3A9C6CB0A4E2", 00:21:40.363 "uuid": "61ea7867-8b24-4d6e-aadf-3a9c6cb0a4e2", 00:21:40.363 "no_auto_visible": false 00:21:40.363 } 00:21:40.363 } 00:21:40.363 }, 00:21:40.363 { 00:21:40.363 "method": "nvmf_subsystem_add_listener", 00:21:40.363 "params": { 00:21:40.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.363 "listen_address": { 00:21:40.363 "trtype": "TCP", 00:21:40.363 "adrfam": "IPv4", 00:21:40.363 "traddr": "10.0.0.2", 00:21:40.363 "trsvcid": "4420" 00:21:40.363 }, 00:21:40.363 "secure_channel": true 00:21:40.363 } 00:21:40.363 } 00:21:40.363 ] 00:21:40.363 } 00:21:40.363 ] 00:21:40.363 }' 00:21:40.363 14:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:40.686 14:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:40.686 "subsystems": [ 00:21:40.686 { 00:21:40.686 "subsystem": "keyring", 00:21:40.686 "config": [ 00:21:40.686 { 00:21:40.686 "method": "keyring_file_add_key", 00:21:40.686 "params": { 00:21:40.686 "name": "key0", 00:21:40.686 "path": "/tmp/tmp.eqjKDGyyXj" 00:21:40.686 } 00:21:40.686 } 00:21:40.686 ] 00:21:40.686 }, 00:21:40.686 { 00:21:40.686 "subsystem": "iobuf", 00:21:40.686 "config": [ 00:21:40.686 { 00:21:40.686 "method": "iobuf_set_options", 00:21:40.686 "params": { 00:21:40.686 "small_pool_count": 8192, 00:21:40.686 "large_pool_count": 1024, 00:21:40.686 "small_bufsize": 8192, 00:21:40.686 "large_bufsize": 135168 00:21:40.686 } 00:21:40.686 } 00:21:40.686 ] 00:21:40.686 }, 00:21:40.686 { 00:21:40.686 "subsystem": "sock", 00:21:40.686 "config": [ 00:21:40.686 { 00:21:40.686 "method": "sock_set_default_impl", 00:21:40.686 "params": { 00:21:40.686 "impl_name": "posix" 00:21:40.686 } 00:21:40.686 }, 00:21:40.686 { 00:21:40.686 "method": "sock_impl_set_options", 00:21:40.686 "params": { 00:21:40.686 "impl_name": "ssl", 00:21:40.686 "recv_buf_size": 4096, 00:21:40.686 "send_buf_size": 4096, 00:21:40.686 "enable_recv_pipe": true, 00:21:40.686 "enable_quickack": false, 00:21:40.686 "enable_placement_id": 0, 00:21:40.686 "enable_zerocopy_send_server": true, 00:21:40.686 "enable_zerocopy_send_client": false, 00:21:40.686 "zerocopy_threshold": 0, 00:21:40.686 "tls_version": 0, 00:21:40.686 "enable_ktls": false 00:21:40.686 } 00:21:40.686 }, 00:21:40.686 { 00:21:40.686 "method": "sock_impl_set_options", 00:21:40.686 "params": { 00:21:40.686 "impl_name": "posix", 00:21:40.686 "recv_buf_size": 2097152, 00:21:40.686 "send_buf_size": 2097152, 00:21:40.686 "enable_recv_pipe": true, 00:21:40.686 "enable_quickack": false, 00:21:40.686 "enable_placement_id": 0, 00:21:40.686 "enable_zerocopy_send_server": true, 00:21:40.686 "enable_zerocopy_send_client": false, 00:21:40.686 "zerocopy_threshold": 0, 00:21:40.687 "tls_version": 0, 00:21:40.687 "enable_ktls": false 00:21:40.687 } 00:21:40.687 } 00:21:40.687 ] 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "subsystem": "vmd", 00:21:40.687 "config": [] 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "subsystem": "accel", 00:21:40.687 "config": [ 00:21:40.687 { 00:21:40.687 "method": "accel_set_options", 00:21:40.687 "params": { 00:21:40.687 "small_cache_size": 128, 00:21:40.687 "large_cache_size": 16, 00:21:40.687 "task_count": 2048, 00:21:40.687 "sequence_count": 2048, 00:21:40.687 "buf_count": 2048 00:21:40.687 } 00:21:40.687 } 00:21:40.687 ] 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "subsystem": "bdev", 00:21:40.687 "config": [ 00:21:40.687 { 00:21:40.687 
"method": "bdev_set_options", 00:21:40.687 "params": { 00:21:40.687 "bdev_io_pool_size": 65535, 00:21:40.687 "bdev_io_cache_size": 256, 00:21:40.687 "bdev_auto_examine": true, 00:21:40.687 "iobuf_small_cache_size": 128, 00:21:40.687 "iobuf_large_cache_size": 16 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_raid_set_options", 00:21:40.687 "params": { 00:21:40.687 "process_window_size_kb": 1024 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_iscsi_set_options", 00:21:40.687 "params": { 00:21:40.687 "timeout_sec": 30 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_nvme_set_options", 00:21:40.687 "params": { 00:21:40.687 "action_on_timeout": "none", 00:21:40.687 "timeout_us": 0, 00:21:40.687 "timeout_admin_us": 0, 00:21:40.687 "keep_alive_timeout_ms": 10000, 00:21:40.687 "arbitration_burst": 0, 00:21:40.687 "low_priority_weight": 0, 00:21:40.687 "medium_priority_weight": 0, 00:21:40.687 "high_priority_weight": 0, 00:21:40.687 "nvme_adminq_poll_period_us": 10000, 00:21:40.687 "nvme_ioq_poll_period_us": 0, 00:21:40.687 "io_queue_requests": 512, 00:21:40.687 "delay_cmd_submit": true, 00:21:40.687 "transport_retry_count": 4, 00:21:40.687 "bdev_retry_count": 3, 00:21:40.687 "transport_ack_timeout": 0, 00:21:40.687 "ctrlr_loss_timeout_sec": 0, 00:21:40.687 "reconnect_delay_sec": 0, 00:21:40.687 "fast_io_fail_timeout_sec": 0, 00:21:40.687 "disable_auto_failback": false, 00:21:40.687 "generate_uuids": false, 00:21:40.687 "transport_tos": 0, 00:21:40.687 "nvme_error_stat": false, 00:21:40.687 "rdma_srq_size": 0, 00:21:40.687 "io_path_stat": false, 00:21:40.687 "allow_accel_sequence": false, 00:21:40.687 "rdma_max_cq_size": 0, 00:21:40.687 "rdma_cm_event_timeout_ms": 0, 00:21:40.687 "dhchap_digests": [ 00:21:40.687 "sha256", 00:21:40.687 "sha384", 00:21:40.687 "sha512" 00:21:40.687 ], 00:21:40.687 "dhchap_dhgroups": [ 00:21:40.687 "null", 00:21:40.687 "ffdhe2048", 00:21:40.687 "ffdhe3072", 00:21:40.687 "ffdhe4096", 00:21:40.687 "ffdhe6144", 00:21:40.687 "ffdhe8192" 00:21:40.687 ] 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_nvme_attach_controller", 00:21:40.687 "params": { 00:21:40.687 "name": "nvme0", 00:21:40.687 "trtype": "TCP", 00:21:40.687 "adrfam": "IPv4", 00:21:40.687 "traddr": "10.0.0.2", 00:21:40.687 "trsvcid": "4420", 00:21:40.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.687 "prchk_reftag": false, 00:21:40.687 "prchk_guard": false, 00:21:40.687 "ctrlr_loss_timeout_sec": 0, 00:21:40.687 "reconnect_delay_sec": 0, 00:21:40.687 "fast_io_fail_timeout_sec": 0, 00:21:40.687 "psk": "key0", 00:21:40.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.687 "hdgst": false, 00:21:40.687 "ddgst": false 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_nvme_set_hotplug", 00:21:40.687 "params": { 00:21:40.687 "period_us": 100000, 00:21:40.687 "enable": false 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_enable_histogram", 00:21:40.687 "params": { 00:21:40.687 "name": "nvme0n1", 00:21:40.687 "enable": true 00:21:40.687 } 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "method": "bdev_wait_for_examine" 00:21:40.687 } 00:21:40.687 ] 00:21:40.687 }, 00:21:40.687 { 00:21:40.687 "subsystem": "nbd", 00:21:40.687 "config": [] 00:21:40.687 } 00:21:40.687 ] 00:21:40.687 }' 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3078416 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3078416 
']' 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3078416 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:40.687 14:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3078416 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3078416' 00:21:40.687 killing process with pid 3078416 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3078416 00:21:40.687 Received shutdown signal, test time was about 1.000000 seconds 00:21:40.687 00:21:40.687 Latency(us) 00:21:40.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.687 =================================================================================================================== 00:21:40.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3078416 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3078205 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3078205 ']' 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3078205 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3078205 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3078205' 00:21:40.687 killing process with pid 3078205 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3078205 00:21:40.687 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3078205 00:21:40.950 14:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:40.950 14:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.950 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:40.950 14:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:40.950 "subsystems": [ 00:21:40.950 { 00:21:40.950 "subsystem": "keyring", 00:21:40.950 "config": [ 00:21:40.950 { 00:21:40.950 "method": "keyring_file_add_key", 00:21:40.950 "params": { 00:21:40.950 "name": "key0", 00:21:40.950 "path": "/tmp/tmp.eqjKDGyyXj" 00:21:40.950 } 00:21:40.950 } 00:21:40.950 ] 00:21:40.950 }, 00:21:40.950 { 00:21:40.950 "subsystem": "iobuf", 00:21:40.950 "config": [ 00:21:40.950 { 00:21:40.950 "method": "iobuf_set_options", 00:21:40.950 "params": { 00:21:40.950 "small_pool_count": 8192, 00:21:40.950 "large_pool_count": 1024, 00:21:40.950 "small_bufsize": 8192, 00:21:40.950 "large_bufsize": 135168 00:21:40.950 } 00:21:40.950 } 00:21:40.950 ] 00:21:40.950 }, 00:21:40.950 { 00:21:40.950 "subsystem": "sock", 
00:21:40.950 "config": [ 00:21:40.950 { 00:21:40.950 "method": "sock_set_default_impl", 00:21:40.950 "params": { 00:21:40.950 "impl_name": "posix" 00:21:40.950 } 00:21:40.950 }, 00:21:40.950 { 00:21:40.950 "method": "sock_impl_set_options", 00:21:40.950 "params": { 00:21:40.950 "impl_name": "ssl", 00:21:40.950 "recv_buf_size": 4096, 00:21:40.950 "send_buf_size": 4096, 00:21:40.951 "enable_recv_pipe": true, 00:21:40.951 "enable_quickack": false, 00:21:40.951 "enable_placement_id": 0, 00:21:40.951 "enable_zerocopy_send_server": true, 00:21:40.951 "enable_zerocopy_send_client": false, 00:21:40.951 "zerocopy_threshold": 0, 00:21:40.951 "tls_version": 0, 00:21:40.951 "enable_ktls": false 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "sock_impl_set_options", 00:21:40.951 "params": { 00:21:40.951 "impl_name": "posix", 00:21:40.951 "recv_buf_size": 2097152, 00:21:40.951 "send_buf_size": 2097152, 00:21:40.951 "enable_recv_pipe": true, 00:21:40.951 "enable_quickack": false, 00:21:40.951 "enable_placement_id": 0, 00:21:40.951 "enable_zerocopy_send_server": true, 00:21:40.951 "enable_zerocopy_send_client": false, 00:21:40.951 "zerocopy_threshold": 0, 00:21:40.951 "tls_version": 0, 00:21:40.951 "enable_ktls": false 00:21:40.951 } 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "vmd", 00:21:40.951 "config": [] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "accel", 00:21:40.951 "config": [ 00:21:40.951 { 00:21:40.951 "method": "accel_set_options", 00:21:40.951 "params": { 00:21:40.951 "small_cache_size": 128, 00:21:40.951 "large_cache_size": 16, 00:21:40.951 "task_count": 2048, 00:21:40.951 "sequence_count": 2048, 00:21:40.951 "buf_count": 2048 00:21:40.951 } 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "bdev", 00:21:40.951 "config": [ 00:21:40.951 { 00:21:40.951 "method": "bdev_set_options", 00:21:40.951 "params": { 00:21:40.951 "bdev_io_pool_size": 65535, 00:21:40.951 "bdev_io_cache_size": 256, 00:21:40.951 "bdev_auto_examine": true, 00:21:40.951 "iobuf_small_cache_size": 128, 00:21:40.951 "iobuf_large_cache_size": 16 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_raid_set_options", 00:21:40.951 "params": { 00:21:40.951 "process_window_size_kb": 1024 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_iscsi_set_options", 00:21:40.951 "params": { 00:21:40.951 "timeout_sec": 30 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_nvme_set_options", 00:21:40.951 "params": { 00:21:40.951 "action_on_timeout": "none", 00:21:40.951 "timeout_us": 0, 00:21:40.951 "timeout_admin_us": 0, 00:21:40.951 "keep_alive_timeout_ms": 10000, 00:21:40.951 "arbitration_burst": 0, 00:21:40.951 "low_priority_weight": 0, 00:21:40.951 "medium_priority_weight": 0, 00:21:40.951 "high_priority_weight": 0, 00:21:40.951 "nvme_adminq_poll_period_us": 10000, 00:21:40.951 "nvme_ioq_poll_period_us": 0, 00:21:40.951 "io_queue_requests": 0, 00:21:40.951 "delay_cmd_submit": true, 00:21:40.951 "transport_retry_count": 4, 00:21:40.951 "bdev_retry_count": 3, 00:21:40.951 "transport_ack_timeout": 0, 00:21:40.951 "ctrlr_loss_timeout_sec": 0, 00:21:40.951 "reconnect_delay_sec": 0, 00:21:40.951 "fast_io_fail_timeout_sec": 0, 00:21:40.951 "disable_auto_failback": false, 00:21:40.951 "generate_uuids": false, 00:21:40.951 "transport_tos": 0, 00:21:40.951 "nvme_error_stat": false, 00:21:40.951 "rdma_srq_size": 0, 00:21:40.951 "io_path_stat": false, 00:21:40.951 
"allow_accel_sequence": false, 00:21:40.951 "rdma_max_cq_size": 0, 00:21:40.951 "rdma_cm_event_timeout_ms": 0, 00:21:40.951 "dhchap_digests": [ 00:21:40.951 "sha256", 00:21:40.951 "sha384", 00:21:40.951 "sha512" 00:21:40.951 ], 00:21:40.951 "dhchap_dhgroups": [ 00:21:40.951 "null", 00:21:40.951 "ffdhe2048", 00:21:40.951 "ffdhe3072", 00:21:40.951 "ffdhe4096", 00:21:40.951 "ffdhe6144", 00:21:40.951 "ffdhe8192" 00:21:40.951 ] 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_nvme_set_hotplug", 00:21:40.951 "params": { 00:21:40.951 "period_us": 100000, 00:21:40.951 "enable": false 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_malloc_create", 00:21:40.951 "params": { 00:21:40.951 "name": "malloc0", 00:21:40.951 "num_blocks": 8192, 00:21:40.951 "block_size": 4096, 00:21:40.951 "physical_block_size": 4096, 00:21:40.951 "uuid": "61ea7867-8b24-4d6e-aadf-3a9c6cb0a4e2", 00:21:40.951 "optimal_io_boundary": 0 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "bdev_wait_for_examine" 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "nbd", 00:21:40.951 "config": [] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "scheduler", 00:21:40.951 "config": [ 00:21:40.951 { 00:21:40.951 "method": "framework_set_scheduler", 00:21:40.951 "params": { 00:21:40.951 "name": "static" 00:21:40.951 } 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "subsystem": "nvmf", 00:21:40.951 "config": [ 00:21:40.951 { 00:21:40.951 "method": "nvmf_set_config", 00:21:40.951 "params": { 00:21:40.951 "discovery_filter": "match_any", 00:21:40.951 "admin_cmd_passthru": { 00:21:40.951 "identify_ctrlr": false 00:21:40.951 } 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_set_max_subsystems", 00:21:40.951 "params": { 00:21:40.951 "max_subsystems": 1024 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_set_crdt", 00:21:40.951 "params": { 00:21:40.951 "crdt1": 0, 00:21:40.951 "crdt2": 0, 00:21:40.951 "crdt3": 0 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_create_transport", 00:21:40.951 "params": { 00:21:40.951 "trtype": "TCP", 00:21:40.951 "max_queue_depth": 128, 00:21:40.951 "max_io_qpairs_per_ctrlr": 127, 00:21:40.951 "in_capsule_data_size": 4096, 00:21:40.951 "max_io_size": 131072, 00:21:40.951 "io_unit_size": 131072, 00:21:40.951 "max_aq_depth": 128, 00:21:40.951 "num_shared_buffers": 511, 00:21:40.951 "buf_cache_size": 4294967295, 00:21:40.951 "dif_insert_or_strip": false, 00:21:40.951 "zcopy": false, 00:21:40.951 "c2h_success": false, 00:21:40.951 "sock_priority": 0, 00:21:40.951 "abort_timeout_sec": 1, 00:21:40.951 "ack_timeout": 0, 00:21:40.951 "data_wr_pool_size": 0 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_create_subsystem", 00:21:40.951 "params": { 00:21:40.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.951 "allow_any_host": false, 00:21:40.951 "serial_number": "00000000000000000000", 00:21:40.951 "model_number": "SPDK bdev Controller", 00:21:40.951 "max_namespaces": 32, 00:21:40.951 "min_cntlid": 1, 00:21:40.951 "max_cntlid": 65519, 00:21:40.951 "ana_reporting": false 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_subsystem_add_host", 00:21:40.951 "params": { 00:21:40.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.951 "host": "nqn.2016-06.io.spdk:host1", 00:21:40.951 "psk": "key0" 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 
"method": "nvmf_subsystem_add_ns", 00:21:40.951 "params": { 00:21:40.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.951 "namespace": { 00:21:40.951 "nsid": 1, 00:21:40.951 "bdev_name": "malloc0", 00:21:40.951 "nguid": "61EA78678B244D6EAADF3A9C6CB0A4E2", 00:21:40.951 "uuid": "61ea7867-8b24-4d6e-aadf-3a9c6cb0a4e2", 00:21:40.951 "no_auto_visible": false 00:21:40.951 } 00:21:40.951 } 00:21:40.951 }, 00:21:40.951 { 00:21:40.951 "method": "nvmf_subsystem_add_listener", 00:21:40.951 "params": { 00:21:40.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.951 "listen_address": { 00:21:40.951 "trtype": "TCP", 00:21:40.951 "adrfam": "IPv4", 00:21:40.951 "traddr": "10.0.0.2", 00:21:40.951 "trsvcid": "4420" 00:21:40.951 }, 00:21:40.951 "secure_channel": true 00:21:40.951 } 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 } 00:21:40.951 ] 00:21:40.951 }' 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3079072 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3079072 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3079072 ']' 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:40.951 14:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.951 [2024-06-10 14:30:18.431057] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:40.951 [2024-06-10 14:30:18.431110] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.951 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.952 [2024-06-10 14:30:18.511464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.212 [2024-06-10 14:30:18.574264] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.212 [2024-06-10 14:30:18.574299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.212 [2024-06-10 14:30:18.574306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.212 [2024-06-10 14:30:18.574313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.212 [2024-06-10 14:30:18.574323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.212 [2024-06-10 14:30:18.574375] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.212 [2024-06-10 14:30:18.771341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.212 [2024-06-10 14:30:18.803344] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.471 [2024-06-10 14:30:18.819610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.731 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:41.731 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:41.731 14:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.731 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:41.731 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3079130 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3079130 /var/tmp/bdevperf.sock 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 3079130 ']' 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
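On the initiator side, bdevperf (launched just below with -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1) receives an analogous config over /dev/fd/63 whose bdev_nvme_attach_controller entry references the same keyring key ("psk": "key0"), and is then driven over its RPC socket once it is listening. A minimal sketch of that flow, with the dumped config trimmed to the keyring and controller-attach entries, relative paths standing in for the full workspace paths, and a crude socket-polling loop standing in for the test's waitforlisten helper:

# Sketch: start bdevperf idle (-z), attach to the TLS listener using the PSK
# from the keyring, then kick off the verify workload over the RPC socket.
BPERF_CONFIG='{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.eqjKDGyyXj" } } ] },
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "10.0.0.2", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "bdev_wait_for_examine" } ] } ] }'
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$BPERF_CONFIG") &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done   # crude stand-in for waitforlisten
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests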
00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 14:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:41.991 "subsystems": [ 00:21:41.991 { 00:21:41.991 "subsystem": "keyring", 00:21:41.991 "config": [ 00:21:41.991 { 00:21:41.991 "method": "keyring_file_add_key", 00:21:41.991 "params": { 00:21:41.991 "name": "key0", 00:21:41.991 "path": "/tmp/tmp.eqjKDGyyXj" 00:21:41.991 } 00:21:41.991 } 00:21:41.991 ] 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "subsystem": "iobuf", 00:21:41.991 "config": [ 00:21:41.991 { 00:21:41.991 "method": "iobuf_set_options", 00:21:41.991 "params": { 00:21:41.991 "small_pool_count": 8192, 00:21:41.991 "large_pool_count": 1024, 00:21:41.991 "small_bufsize": 8192, 00:21:41.991 "large_bufsize": 135168 00:21:41.991 } 00:21:41.991 } 00:21:41.991 ] 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "subsystem": "sock", 00:21:41.991 "config": [ 00:21:41.991 { 00:21:41.991 "method": "sock_set_default_impl", 00:21:41.991 "params": { 00:21:41.991 "impl_name": "posix" 00:21:41.991 } 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "method": "sock_impl_set_options", 00:21:41.991 "params": { 00:21:41.991 "impl_name": "ssl", 00:21:41.991 "recv_buf_size": 4096, 00:21:41.991 "send_buf_size": 4096, 00:21:41.991 "enable_recv_pipe": true, 00:21:41.991 "enable_quickack": false, 00:21:41.991 "enable_placement_id": 0, 00:21:41.991 "enable_zerocopy_send_server": true, 00:21:41.991 "enable_zerocopy_send_client": false, 00:21:41.991 "zerocopy_threshold": 0, 00:21:41.991 "tls_version": 0, 00:21:41.991 "enable_ktls": false 00:21:41.991 } 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "method": "sock_impl_set_options", 00:21:41.991 "params": { 00:21:41.991 "impl_name": "posix", 00:21:41.991 "recv_buf_size": 2097152, 00:21:41.991 "send_buf_size": 2097152, 00:21:41.991 "enable_recv_pipe": true, 00:21:41.991 "enable_quickack": false, 00:21:41.991 "enable_placement_id": 0, 00:21:41.991 "enable_zerocopy_send_server": true, 00:21:41.991 "enable_zerocopy_send_client": false, 00:21:41.991 "zerocopy_threshold": 0, 00:21:41.991 "tls_version": 0, 00:21:41.991 "enable_ktls": false 00:21:41.991 } 00:21:41.991 } 00:21:41.991 ] 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "subsystem": "vmd", 00:21:41.991 "config": [] 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "subsystem": "accel", 00:21:41.991 "config": [ 00:21:41.991 { 00:21:41.991 "method": "accel_set_options", 00:21:41.991 "params": { 00:21:41.991 "small_cache_size": 128, 00:21:41.991 "large_cache_size": 16, 00:21:41.991 "task_count": 2048, 00:21:41.991 "sequence_count": 2048, 00:21:41.991 "buf_count": 2048 00:21:41.991 } 00:21:41.991 } 00:21:41.991 ] 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "subsystem": "bdev", 00:21:41.991 "config": [ 00:21:41.991 { 00:21:41.991 "method": "bdev_set_options", 00:21:41.991 "params": { 00:21:41.991 "bdev_io_pool_size": 65535, 00:21:41.991 "bdev_io_cache_size": 256, 00:21:41.991 "bdev_auto_examine": true, 00:21:41.991 "iobuf_small_cache_size": 128, 00:21:41.991 "iobuf_large_cache_size": 16 00:21:41.991 } 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "method": "bdev_raid_set_options", 00:21:41.991 "params": { 00:21:41.991 "process_window_size_kb": 1024 00:21:41.991 } 
00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "method": "bdev_iscsi_set_options", 00:21:41.991 "params": { 00:21:41.991 "timeout_sec": 30 00:21:41.991 } 00:21:41.991 }, 00:21:41.991 { 00:21:41.991 "method": "bdev_nvme_set_options", 00:21:41.991 "params": { 00:21:41.991 "action_on_timeout": "none", 00:21:41.991 "timeout_us": 0, 00:21:41.991 "timeout_admin_us": 0, 00:21:41.991 "keep_alive_timeout_ms": 10000, 00:21:41.991 "arbitration_burst": 0, 00:21:41.991 "low_priority_weight": 0, 00:21:41.991 "medium_priority_weight": 0, 00:21:41.991 "high_priority_weight": 0, 00:21:41.991 "nvme_adminq_poll_period_us": 10000, 00:21:41.991 "nvme_ioq_poll_period_us": 0, 00:21:41.991 "io_queue_requests": 512, 00:21:41.991 "delay_cmd_submit": true, 00:21:41.991 "transport_retry_count": 4, 00:21:41.991 "bdev_retry_count": 3, 00:21:41.991 "transport_ack_timeout": 0, 00:21:41.991 "ctrlr_loss_timeout_sec": 0, 00:21:41.991 "reconnect_delay_sec": 0, 00:21:41.991 "fast_io_fail_timeout_sec": 0, 00:21:41.991 "disable_auto_failback": false, 00:21:41.991 "generate_uuids": false, 00:21:41.991 "transport_tos": 0, 00:21:41.991 "nvme_error_stat": false, 00:21:41.991 "rdma_srq_size": 0, 00:21:41.991 "io_path_stat": false, 00:21:41.991 "allow_accel_sequence": false, 00:21:41.991 "rdma_max_cq_size": 0, 00:21:41.991 "rdma_cm_event_timeout_ms": 0, 00:21:41.991 "dhchap_digests": [ 00:21:41.991 "sha256", 00:21:41.992 "sha384", 00:21:41.992 "sha512" 00:21:41.992 ], 00:21:41.992 "dhchap_dhgroups": [ 00:21:41.992 "null", 00:21:41.992 "ffdhe2048", 00:21:41.992 "ffdhe3072", 00:21:41.992 "ffdhe4096", 00:21:41.992 "ffdhe6144", 00:21:41.992 "ffdhe8192" 00:21:41.992 ] 00:21:41.992 } 00:21:41.992 }, 00:21:41.992 { 00:21:41.992 "method": "bdev_nvme_attach_controller", 00:21:41.992 "params": { 00:21:41.992 "name": "nvme0", 00:21:41.992 "trtype": "TCP", 00:21:41.992 "adrfam": "IPv4", 00:21:41.992 "traddr": "10.0.0.2", 00:21:41.992 "trsvcid": "4420", 00:21:41.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.992 "prchk_reftag": false, 00:21:41.992 "prchk_guard": false, 00:21:41.992 "ctrlr_loss_timeout_sec": 0, 00:21:41.992 "reconnect_delay_sec": 0, 00:21:41.992 "fast_io_fail_timeout_sec": 0, 00:21:41.992 "psk": "key0", 00:21:41.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.992 "hdgst": false, 00:21:41.992 "ddgst": false 00:21:41.992 } 00:21:41.992 }, 00:21:41.992 { 00:21:41.992 "method": "bdev_nvme_set_hotplug", 00:21:41.992 "params": { 00:21:41.992 "period_us": 100000, 00:21:41.992 "enable": false 00:21:41.992 } 00:21:41.992 }, 00:21:41.992 { 00:21:41.992 "method": "bdev_enable_histogram", 00:21:41.992 "params": { 00:21:41.992 "name": "nvme0n1", 00:21:41.992 "enable": true 00:21:41.992 } 00:21:41.992 }, 00:21:41.992 { 00:21:41.992 "method": "bdev_wait_for_examine" 00:21:41.992 } 00:21:41.992 ] 00:21:41.992 }, 00:21:41.992 { 00:21:41.992 "subsystem": "nbd", 00:21:41.992 "config": [] 00:21:41.992 } 00:21:41.992 ] 00:21:41.992 }' 00:21:41.992 [2024-06-10 14:30:19.377392] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:21:41.992 [2024-06-10 14:30:19.377442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079130 ] 00:21:41.992 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.992 [2024-06-10 14:30:19.434474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.992 [2024-06-10 14:30:19.499303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.252 [2024-06-10 14:30:19.637920] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.823 14:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:42.823 14:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:42.823 14:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.823 14:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:43.084 14:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.084 14:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.084 Running I/O for 1 seconds... 00:21:44.025 00:21:44.025 Latency(us) 00:21:44.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.025 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:44.025 Verification LBA range: start 0x0 length 0x2000 00:21:44.025 nvme0n1 : 1.03 3729.53 14.57 0.00 0.00 33934.14 6225.92 90439.68 00:21:44.025 =================================================================================================================== 00:21:44.025 Total : 3729.53 14.57 0.00 0.00 33934.14 6225.92 90439.68 00:21:44.025 0 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:44.025 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.026 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:44.026 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:44.026 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:44.026 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.286 nvmf_trace.0 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3079130 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3079130 ']' 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3079130 
00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3079130 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3079130' 00:21:44.286 killing process with pid 3079130 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3079130 00:21:44.286 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.286 00:21:44.286 Latency(us) 00:21:44.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.286 =================================================================================================================== 00:21:44.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.286 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3079130 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.547 rmmod nvme_tcp 00:21:44.547 rmmod nvme_fabrics 00:21:44.547 rmmod nvme_keyring 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3079072 ']' 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3079072 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 3079072 ']' 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 3079072 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:44.547 14:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3079072 00:21:44.547 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:44.547 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:44.547 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3079072' 00:21:44.547 killing process with pid 3079072 00:21:44.547 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 3079072 00:21:44.547 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 3079072 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.807 14:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.717 14:30:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.717 14:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hxCXtfNHT1 /tmp/tmp.poxRt7PD4c /tmp/tmp.eqjKDGyyXj 00:21:46.717 00:21:46.717 real 1m19.960s 00:21:46.717 user 2m4.561s 00:21:46.717 sys 0m25.654s 00:21:46.717 14:30:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:46.717 14:30:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.717 ************************************ 00:21:46.717 END TEST nvmf_tls 00:21:46.717 ************************************ 00:21:46.717 14:30:24 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:46.717 14:30:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:46.717 14:30:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:46.717 14:30:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.978 ************************************ 00:21:46.978 START TEST nvmf_fips 00:21:46.978 ************************************ 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:46.978 * Looking for test storage... 
00:21:46.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.978 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.979 14:30:24 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:46.979 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:47.240 Error setting digest 00:21:47.240 00F221C56F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:47.240 00F221C56F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.240 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.241 14:30:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.383 
14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.383 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:21:55.384 00:21:55.384 --- 10.0.0.2 ping statistics --- 00:21:55.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.384 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:55.384 00:21:55.384 --- 10.0.0.1 ping statistics --- 00:21:55.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.384 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3083846 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3083846 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 3083846 ']' 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.384 14:30:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.384 [2024-06-10 14:30:31.901360] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:55.384 [2024-06-10 14:30:31.901433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.384 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.384 [2024-06-10 14:30:31.970585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.384 [2024-06-10 14:30:32.043641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.384 [2024-06-10 14:30:32.043675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:55.384 [2024-06-10 14:30:32.043686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.384 [2024-06-10 14:30:32.043693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.384 [2024-06-10 14:30:32.043698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.384 [2024-06-10 14:30:32.043716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.384 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.385 14:30:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.645 [2024-06-10 14:30:32.978944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.645 [2024-06-10 14:30:32.994942] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.645 [2024-06-10 14:30:32.995101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.645 [2024-06-10 14:30:33.021603] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:55.645 malloc0 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3084175 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3084175 /var/tmp/bdevperf.sock 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 3084175 ']' 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:55.645 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.645 [2024-06-10 14:30:33.102628] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:21:55.645 [2024-06-10 14:30:33.102681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084175 ] 00:21:55.645 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.645 [2024-06-10 14:30:33.152368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.645 [2024-06-10 14:30:33.204416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.905 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:55.905 14:30:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:55.905 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.905 [2024-06-10 14:30:33.471884] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.905 [2024-06-10 14:30:33.471952] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.166 TLSTESTn1 00:21:56.166 14:30:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.166 Running I/O for 10 seconds... 
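For reference, the TLS path exercised here (fips.sh@136-154 in the trace above) boils down to a short initiator-side sequence. A minimal sketch follows, assuming a target already listening on 10.0.0.2:4420 and a bdevperf instance serving /var/tmp/bdevperf.sock as started above; the relative script paths and the local key.txt location are stand-ins for the absolute workspace paths used in this run. The 10-second verify results it produces are reported below.

# Sketch only: reproduces the PSK/attach steps visible in the trace.
KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=./key.txt                      # hypothetical local path; this run uses test/nvmf/fips/key.txt
echo -n "$KEY" > "$KEY_PATH"
chmod 0600 "$KEY_PATH"                  # PSK file must be readable by the owner only
# Attach a TLS-protected controller through bdevperf's RPC socket (fips.sh@150)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
# Start the queued jobs (10 s verify workload, fips.sh@154)
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests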
00:22:06.163 00:22:06.163 Latency(us) 00:22:06.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.163 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:06.163 Verification LBA range: start 0x0 length 0x2000 00:22:06.163 TLSTESTn1 : 10.03 4161.24 16.25 0.00 0.00 30706.30 4587.52 67283.63 00:22:06.163 =================================================================================================================== 00:22:06.163 Total : 4161.24 16.25 0.00 0.00 30706.30 4587.52 67283.63 00:22:06.163 0 00:22:06.163 14:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:06.163 14:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:06.163 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:22:06.163 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:22:06.163 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:06.424 nvmf_trace.0 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3084175 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 3084175 ']' 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 3084175 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3084175 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3084175' 00:22:06.424 killing process with pid 3084175 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 3084175 00:22:06.424 Received shutdown signal, test time was about 10.000000 seconds 00:22:06.424 00:22:06.424 Latency(us) 00:22:06.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.424 =================================================================================================================== 00:22:06.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.424 [2024-06-10 14:30:43.906375] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:06.424 14:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 3084175 00:22:06.424 14:30:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:06.424 14:30:44 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.424 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.685 rmmod nvme_tcp 00:22:06.685 rmmod nvme_fabrics 00:22:06.685 rmmod nvme_keyring 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3083846 ']' 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3083846 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 3083846 ']' 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 3083846 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3083846 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3083846' 00:22:06.685 killing process with pid 3083846 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 3083846 00:22:06.685 [2024-06-10 14:30:44.145405] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:06.685 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 3083846 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.946 14:30:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.858 14:30:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.858 14:30:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:08.858 00:22:08.858 real 0m22.046s 00:22:08.858 user 0m22.941s 00:22:08.858 sys 0m9.372s 00:22:08.858 14:30:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:08.858 14:30:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.858 ************************************ 00:22:08.858 END TEST nvmf_fips 
00:22:08.858 ************************************ 00:22:08.858 14:30:46 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:08.858 14:30:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:08.858 14:30:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:08.858 14:30:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:08.858 14:30:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.858 14:30:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:15.471 14:30:52 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.472 14:30:52 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.472 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:15.472 14:30:52 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:15.472 14:30:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:15.472 14:30:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:15.472 14:30:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:22:15.472 ************************************ 00:22:15.472 START TEST nvmf_perf_adq 00:22:15.472 ************************************ 00:22:15.472 14:30:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:15.734 * Looking for test storage... 00:22:15.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.734 14:30:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.322 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.323 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:22.323 14:30:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:23.709 14:31:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:25.635 14:31:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:30.929 14:31:07 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:30.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:30.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:30.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:30.929 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.929 14:31:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.929 14:31:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:30.929 00:22:30.929 --- 10.0.0.2 ping statistics --- 00:22:30.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.929 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:30.929 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:22:30.929 00:22:30.929 --- 10.0.0.1 ping statistics --- 00:22:30.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.929 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3095722 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3095722 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 3095722 ']' 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.930 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:30.930 [2024-06-10 14:31:08.107258] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:22:30.930 [2024-06-10 14:31:08.107334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.930 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.930 [2024-06-10 14:31:08.195120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.930 [2024-06-10 14:31:08.292814] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.930 [2024-06-10 14:31:08.292870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.930 [2024-06-10 14:31:08.292879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.930 [2024-06-10 14:31:08.292886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.930 [2024-06-10 14:31:08.292892] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.930 [2024-06-10 14:31:08.293029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.930 [2024-06-10 14:31:08.293170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.930 [2024-06-10 14:31:08.293356] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.930 [2024-06-10 14:31:08.293362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.503 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:31.503 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:31.503 14:31:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.503 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:31.503 14:31:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.503 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 [2024-06-10 14:31:09.159554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 Malloc1 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.764 [2024-06-10 14:31:09.218891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3096050 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:31.764 14:31:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:31.764 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.678 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:33.678 14:31:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.678 14:31:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.678 14:31:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.678 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:33.678 "tick_rate": 2400000000, 
00:22:33.678 "poll_groups": [ 00:22:33.678 { 00:22:33.678 "name": "nvmf_tgt_poll_group_000", 00:22:33.678 "admin_qpairs": 1, 00:22:33.678 "io_qpairs": 1, 00:22:33.678 "current_admin_qpairs": 1, 00:22:33.678 "current_io_qpairs": 1, 00:22:33.678 "pending_bdev_io": 0, 00:22:33.678 "completed_nvme_io": 19919, 00:22:33.678 "transports": [ 00:22:33.678 { 00:22:33.678 "trtype": "TCP" 00:22:33.678 } 00:22:33.678 ] 00:22:33.678 }, 00:22:33.678 { 00:22:33.678 "name": "nvmf_tgt_poll_group_001", 00:22:33.678 "admin_qpairs": 0, 00:22:33.678 "io_qpairs": 1, 00:22:33.678 "current_admin_qpairs": 0, 00:22:33.678 "current_io_qpairs": 1, 00:22:33.678 "pending_bdev_io": 0, 00:22:33.678 "completed_nvme_io": 28416, 00:22:33.678 "transports": [ 00:22:33.678 { 00:22:33.678 "trtype": "TCP" 00:22:33.678 } 00:22:33.678 ] 00:22:33.678 }, 00:22:33.678 { 00:22:33.678 "name": "nvmf_tgt_poll_group_002", 00:22:33.678 "admin_qpairs": 0, 00:22:33.678 "io_qpairs": 1, 00:22:33.678 "current_admin_qpairs": 0, 00:22:33.678 "current_io_qpairs": 1, 00:22:33.678 "pending_bdev_io": 0, 00:22:33.678 "completed_nvme_io": 20973, 00:22:33.678 "transports": [ 00:22:33.678 { 00:22:33.678 "trtype": "TCP" 00:22:33.678 } 00:22:33.678 ] 00:22:33.678 }, 00:22:33.678 { 00:22:33.678 "name": "nvmf_tgt_poll_group_003", 00:22:33.678 "admin_qpairs": 0, 00:22:33.678 "io_qpairs": 1, 00:22:33.678 "current_admin_qpairs": 0, 00:22:33.678 "current_io_qpairs": 1, 00:22:33.678 "pending_bdev_io": 0, 00:22:33.678 "completed_nvme_io": 19862, 00:22:33.678 "transports": [ 00:22:33.678 { 00:22:33.678 "trtype": "TCP" 00:22:33.678 } 00:22:33.678 ] 00:22:33.678 } 00:22:33.679 ] 00:22:33.679 }' 00:22:33.679 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:33.679 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:33.939 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:33.939 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:33.939 14:31:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3096050 00:22:42.077 Initializing NVMe Controllers 00:22:42.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:42.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:42.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:42.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:42.077 Initialization complete. Launching workers. 
00:22:42.077 ======================================================== 00:22:42.077 Latency(us) 00:22:42.077 Device Information : IOPS MiB/s Average min max 00:22:42.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10592.40 41.38 6042.24 1886.25 10186.33 00:22:42.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15144.90 59.16 4225.37 1197.24 9179.27 00:22:42.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11176.90 43.66 5726.21 1647.58 11013.71 00:22:42.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10623.80 41.50 6024.48 1716.33 11347.81 00:22:42.077 ======================================================== 00:22:42.077 Total : 47538.00 185.70 5385.14 1197.24 11347.81 00:22:42.077 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.077 rmmod nvme_tcp 00:22:42.077 rmmod nvme_fabrics 00:22:42.077 rmmod nvme_keyring 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3095722 ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 3095722 ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3095722' 00:22:42.077 killing process with pid 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 3095722 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.077 14:31:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.622 14:31:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:44.622 14:31:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:44.622 14:31:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:46.005 14:31:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:47.978 14:31:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.265 
14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:53.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:53.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.265 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:53.266 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:53.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.266 
14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:22:53.266 00:22:53.266 --- 10.0.0.2 ping statistics --- 00:22:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.266 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:22:53.266 00:22:53.266 --- 10.0.0.1 ping statistics --- 00:22:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.266 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:53.266 net.core.busy_poll = 1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:53.266 net.core.busy_read = 1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3100545 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3100545 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 3100545 ']' 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:53.266 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.527 [2024-06-10 14:31:30.867530] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:22:53.527 [2024-06-10 14:31:30.867596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.527 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.527 [2024-06-10 14:31:30.955693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.527 [2024-06-10 14:31:31.051661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.527 [2024-06-10 14:31:31.051723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.527 [2024-06-10 14:31:31.051731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.527 [2024-06-10 14:31:31.051738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.527 [2024-06-10 14:31:31.051749] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
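For reference, the ADQ NIC configuration that perf_adq.sh (steps 22-38 in the trace above) just applied inside the cvl_0_0_ns_spdk namespace reduces to the short sequence below. This is a condensed sketch, not a replay of the script: $IFACE is a placeholder for the cvl_0_0 port, and the "ip netns exec cvl_0_0_ns_spdk" prefix the test puts in front of every command is dropped for brevity.

    # enable hardware TC offload on the ice/E810 port and turn off its packet-inspect-optimize priv flag
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    # let sockets busy-poll the device queues instead of waiting for interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # carve the port into two traffic classes: queues 0-1 -> TC0, queues 2-3 -> TC1
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, offloaded to hardware (skip_sw)
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The script then runs the repo's scripts/perf/nvmf/set_xps_rxqs helper against the same interface. The matching SPDK-side knobs appear in the RPC calls that follow: sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1 for this ADQ run, versus --enable-placement-id 0 and --sock-priority 0 in the first run.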
00:22:53.527 [2024-06-10 14:31:31.051896] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.527 [2024-06-10 14:31:31.052042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.527 [2024-06-10 14:31:31.052210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.527 [2024-06-10 14:31:31.052211] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 [2024-06-10 14:31:31.918831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 Malloc1 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.467 [2024-06-10 14:31:31.978222] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3100897 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:54.467 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:54.467 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.009 14:31:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:57.009 14:31:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.009 14:31:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.009 14:31:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.009 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:57.009 "tick_rate": 2400000000, 00:22:57.009 "poll_groups": [ 00:22:57.009 { 00:22:57.009 "name": "nvmf_tgt_poll_group_000", 00:22:57.009 "admin_qpairs": 1, 00:22:57.009 "io_qpairs": 3, 00:22:57.009 "current_admin_qpairs": 1, 00:22:57.009 "current_io_qpairs": 3, 00:22:57.009 "pending_bdev_io": 0, 00:22:57.009 "completed_nvme_io": 29652, 00:22:57.009 "transports": [ 00:22:57.009 { 00:22:57.009 "trtype": "TCP" 00:22:57.009 } 00:22:57.009 ] 00:22:57.009 }, 00:22:57.009 { 00:22:57.009 "name": "nvmf_tgt_poll_group_001", 00:22:57.009 "admin_qpairs": 0, 00:22:57.009 "io_qpairs": 1, 00:22:57.009 "current_admin_qpairs": 0, 00:22:57.009 "current_io_qpairs": 1, 00:22:57.009 "pending_bdev_io": 0, 00:22:57.009 "completed_nvme_io": 40304, 00:22:57.009 "transports": [ 00:22:57.009 { 00:22:57.009 "trtype": "TCP" 00:22:57.009 } 00:22:57.010 ] 00:22:57.010 }, 00:22:57.010 { 00:22:57.010 "name": "nvmf_tgt_poll_group_002", 00:22:57.010 "admin_qpairs": 0, 00:22:57.010 "io_qpairs": 0, 00:22:57.010 "current_admin_qpairs": 0, 00:22:57.010 "current_io_qpairs": 0, 00:22:57.010 "pending_bdev_io": 0, 00:22:57.010 "completed_nvme_io": 0, 
00:22:57.010 "transports": [ 00:22:57.010 { 00:22:57.010 "trtype": "TCP" 00:22:57.010 } 00:22:57.010 ] 00:22:57.010 }, 00:22:57.010 { 00:22:57.010 "name": "nvmf_tgt_poll_group_003", 00:22:57.010 "admin_qpairs": 0, 00:22:57.010 "io_qpairs": 0, 00:22:57.010 "current_admin_qpairs": 0, 00:22:57.010 "current_io_qpairs": 0, 00:22:57.010 "pending_bdev_io": 0, 00:22:57.010 "completed_nvme_io": 0, 00:22:57.010 "transports": [ 00:22:57.010 { 00:22:57.010 "trtype": "TCP" 00:22:57.010 } 00:22:57.010 ] 00:22:57.010 } 00:22:57.010 ] 00:22:57.010 }' 00:22:57.010 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:57.010 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:57.010 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:57.010 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:57.010 14:31:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3100897 00:23:05.149 Initializing NVMe Controllers 00:23:05.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:05.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:05.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:05.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:05.149 Initialization complete. Launching workers. 00:23:05.149 ======================================================== 00:23:05.149 Latency(us) 00:23:05.149 Device Information : IOPS MiB/s Average min max 00:23:05.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5015.30 19.59 12761.51 1923.94 59226.53 00:23:05.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 21662.39 84.62 2954.55 1088.30 45320.90 00:23:05.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5311.60 20.75 12051.47 1926.69 58868.17 00:23:05.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5405.10 21.11 11842.25 1915.14 58690.64 00:23:05.149 ======================================================== 00:23:05.149 Total : 37394.39 146.07 6846.65 1088.30 59226.53 00:23:05.149 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.149 rmmod nvme_tcp 00:23:05.149 rmmod nvme_fabrics 00:23:05.149 rmmod nvme_keyring 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3100545 ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 3100545 ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3100545' 00:23:05.149 killing process with pid 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 3100545 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.149 14:31:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.150 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.150 14:31:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.064 14:31:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:07.064 14:31:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:07.064 00:23:07.064 real 0m51.460s 00:23:07.064 user 2m50.150s 00:23:07.064 sys 0m9.784s 00:23:07.064 14:31:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:07.064 14:31:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:07.064 ************************************ 00:23:07.064 END TEST nvmf_perf_adq 00:23:07.064 ************************************ 00:23:07.064 14:31:44 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:07.064 14:31:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:07.064 14:31:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:07.064 14:31:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.064 ************************************ 00:23:07.064 START TEST nvmf_shutdown 00:23:07.064 ************************************ 00:23:07.064 14:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:07.064 * Looking for test storage... 
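For context on the pass criteria the nvmf_perf_adq test applied above: after each spdk_nvme_perf run it queried nvmf_get_stats and counted poll groups by their current_io_qpairs value. A rough standalone equivalent using the plain rpc.py client (an assumption here; the test itself goes through its rpc_cmd wrapper and the target's UNIX socket):

    # run 1, ADQ off: each of the four poll groups should be serving exactly one I/O qpair -> expect 4
    scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l
    # run 2, ADQ + busy_poll on: placement should concentrate connections, leaving idle groups -> expect at least 2
    scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l

In the trace the counts came back 4 and 2 respectively, so the [[ 4 -ne 4 ]] and [[ 2 -lt 2 ]] guards both fell through and the test reached its END banner.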
00:23:07.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:07.064 14:31:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.325 14:31:44 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.326 ************************************ 00:23:07.326 START TEST nvmf_shutdown_tc1 00:23:07.326 ************************************ 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:23:07.326 14:31:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.326 14:31:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:15.467 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.467 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:15.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.468 14:31:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:15.468 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:15.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:23:15.468 00:23:15.468 --- 10.0.0.2 ping statistics --- 00:23:15.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.468 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:23:15.468 00:23:15.468 --- 10.0.0.1 ping statistics --- 00:23:15.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.468 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3107030 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3107030 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3107030 ']' 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:15.468 14:31:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.468 [2024-06-10 14:31:52.009787] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
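[editor note] The nvmf_tcp_init trace above splits the two E810 ports between a target network namespace and the root (initiator) side: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed, stand-alone sketch of that sequence follows; the interface names and addresses are simply the ones this run chose and will differ on other hosts.

#!/usr/bin/env bash
# Namespace split used by nvmf_tcp_init (names/addresses taken from this run).
TARGET_IF=cvl_0_0        # becomes the SPDK target port, 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"               # target port now lives inside the namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the default port before the target starts listening.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, exactly as the harness does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The target itself is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, as traced above), so it listens on 10.0.0.2 while the initiator-side tools connect from the root namespace via 10.0.0.1.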
00:23:15.468 [2024-06-10 14:31:52.009851] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.468 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.468 [2024-06-10 14:31:52.079638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.468 [2024-06-10 14:31:52.152973] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.468 [2024-06-10 14:31:52.153010] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.468 [2024-06-10 14:31:52.153018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.468 [2024-06-10 14:31:52.153024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.468 [2024-06-10 14:31:52.153030] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.468 [2024-06-10 14:31:52.153133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.468 [2024-06-10 14:31:52.153292] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.468 [2024-06-10 14:31:52.153449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:15.468 [2024-06-10 14:31:52.153575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.468 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:15.468 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:15.468 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 [2024-06-10 14:31:52.297106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 Malloc1 00:23:15.469 [2024-06-10 14:31:52.400579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.469 Malloc2 00:23:15.469 Malloc3 00:23:15.469 Malloc4 00:23:15.469 Malloc5 00:23:15.469 Malloc6 00:23:15.469 Malloc7 00:23:15.469 Malloc8 00:23:15.469 Malloc9 00:23:15.469 Malloc10 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3107266 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3107266 /var/tmp/bdevperf.sock 00:23:15.469 14:31:52 
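[editor note] The ten create_subsystems iterations above only show cat appending to rpcs.txt followed by a single batched rpc_cmd, after which Malloc1 through Malloc10 appear and the target reports a listener on 10.0.0.2 port 4420. The exact rpcs.txt contents are not echoed in the log, so the loop below is a hypothetical reconstruction using standard SPDK rpc.py calls; the malloc size, block size, and serial numbers are guesses, while the transport line is copied from the trace.

# Hypothetical per-subsystem setup equivalent to the batched rpcs.txt above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192          # copied from the trace above

for i in {1..10}; do
    "$RPC" bdev_malloc_create -b "Malloc$i" 64 512      # 64 MiB / 512 B blocks: illustrative values
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420                      # matches the listener notice above
done

In the harness these calls are concatenated into rpcs.txt and replayed through one rpc_cmd, which appears to be why only a single RPC invocation shows up in the trace.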
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 3107266 ']' 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": "$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.469 "hdgst": ${hdgst:-false}, 00:23:15.469 "ddgst": ${ddgst:-false} 00:23:15.469 }, 00:23:15.469 "method": "bdev_nvme_attach_controller" 00:23:15.469 } 00:23:15.469 EOF 00:23:15.469 )") 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": "$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.469 "hdgst": ${hdgst:-false}, 00:23:15.469 "ddgst": ${ddgst:-false} 00:23:15.469 }, 00:23:15.469 "method": "bdev_nvme_attach_controller" 00:23:15.469 } 00:23:15.469 EOF 00:23:15.469 )") 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": 
"$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.469 "hdgst": ${hdgst:-false}, 00:23:15.469 "ddgst": ${ddgst:-false} 00:23:15.469 }, 00:23:15.469 "method": "bdev_nvme_attach_controller" 00:23:15.469 } 00:23:15.469 EOF 00:23:15.469 )") 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": "$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.469 "hdgst": ${hdgst:-false}, 00:23:15.469 "ddgst": ${ddgst:-false} 00:23:15.469 }, 00:23:15.469 "method": "bdev_nvme_attach_controller" 00:23:15.469 } 00:23:15.469 EOF 00:23:15.469 )") 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": "$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.469 "hdgst": ${hdgst:-false}, 00:23:15.469 "ddgst": ${ddgst:-false} 00:23:15.469 }, 00:23:15.469 "method": "bdev_nvme_attach_controller" 00:23:15.469 } 00:23:15.469 EOF 00:23:15.469 )") 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.469 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.469 { 00:23:15.469 "params": { 00:23:15.469 "name": "Nvme$subsystem", 00:23:15.469 "trtype": "$TEST_TRANSPORT", 00:23:15.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.469 "adrfam": "ipv4", 00:23:15.469 "trsvcid": "$NVMF_PORT", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.470 "hdgst": ${hdgst:-false}, 00:23:15.470 "ddgst": ${ddgst:-false} 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 } 00:23:15.470 EOF 00:23:15.470 )") 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.470 [2024-06-10 14:31:52.847816] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:23:15.470 [2024-06-10 14:31:52.847867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.470 { 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme$subsystem", 00:23:15.470 "trtype": "$TEST_TRANSPORT", 00:23:15.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "$NVMF_PORT", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.470 "hdgst": ${hdgst:-false}, 00:23:15.470 "ddgst": ${ddgst:-false} 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 } 00:23:15.470 EOF 00:23:15.470 )") 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.470 { 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme$subsystem", 00:23:15.470 "trtype": "$TEST_TRANSPORT", 00:23:15.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "$NVMF_PORT", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.470 "hdgst": ${hdgst:-false}, 00:23:15.470 "ddgst": ${ddgst:-false} 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 } 00:23:15.470 EOF 00:23:15.470 )") 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.470 { 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme$subsystem", 00:23:15.470 "trtype": "$TEST_TRANSPORT", 00:23:15.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "$NVMF_PORT", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.470 "hdgst": ${hdgst:-false}, 00:23:15.470 "ddgst": ${ddgst:-false} 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 } 00:23:15.470 EOF 00:23:15.470 )") 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:15.470 { 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme$subsystem", 00:23:15.470 "trtype": "$TEST_TRANSPORT", 00:23:15.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "$NVMF_PORT", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.470 "hdgst": ${hdgst:-false}, 00:23:15.470 "ddgst": 
${ddgst:-false} 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 } 00:23:15.470 EOF 00:23:15.470 )") 00:23:15.470 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:15.470 14:31:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme1", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme2", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme3", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme4", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme5", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme6", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme7", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 
00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme8", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme9", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 },{ 00:23:15.470 "params": { 00:23:15.470 "name": "Nvme10", 00:23:15.470 "trtype": "tcp", 00:23:15.470 "traddr": "10.0.0.2", 00:23:15.470 "adrfam": "ipv4", 00:23:15.470 "trsvcid": "4420", 00:23:15.470 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:15.470 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:15.470 "hdgst": false, 00:23:15.470 "ddgst": false 00:23:15.470 }, 00:23:15.470 "method": "bdev_nvme_attach_controller" 00:23:15.470 }' 00:23:15.470 [2024-06-10 14:31:52.926488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.470 [2024-06-10 14:31:52.991419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3107266 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:16.853 14:31:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:17.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3107266 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3107030 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:17.796 14:31:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.796 "trsvcid": "$NVMF_PORT", 00:23:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.796 "hdgst": ${hdgst:-false}, 00:23:17.796 "ddgst": ${ddgst:-false} 00:23:17.796 }, 00:23:17.796 "method": "bdev_nvme_attach_controller" 00:23:17.796 } 00:23:17.796 EOF 00:23:17.796 )") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.796 "trsvcid": "$NVMF_PORT", 00:23:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.796 "hdgst": ${hdgst:-false}, 00:23:17.796 "ddgst": ${ddgst:-false} 00:23:17.796 }, 00:23:17.796 "method": "bdev_nvme_attach_controller" 00:23:17.796 } 00:23:17.796 EOF 00:23:17.796 )") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.796 "trsvcid": "$NVMF_PORT", 00:23:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.796 "hdgst": ${hdgst:-false}, 00:23:17.796 "ddgst": ${ddgst:-false} 00:23:17.796 }, 00:23:17.796 "method": "bdev_nvme_attach_controller" 00:23:17.796 } 00:23:17.796 EOF 00:23:17.796 )") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.796 "trsvcid": "$NVMF_PORT", 00:23:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.796 "hdgst": ${hdgst:-false}, 00:23:17.796 "ddgst": ${ddgst:-false} 00:23:17.796 }, 00:23:17.796 "method": "bdev_nvme_attach_controller" 00:23:17.796 } 00:23:17.796 EOF 00:23:17.796 )") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.796 14:31:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.796 "trsvcid": "$NVMF_PORT", 00:23:17.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.796 "hdgst": ${hdgst:-false}, 00:23:17.796 "ddgst": ${ddgst:-false} 00:23:17.796 }, 00:23:17.796 "method": "bdev_nvme_attach_controller" 00:23:17.796 } 00:23:17.796 EOF 00:23:17.796 )") 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.796 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.796 { 00:23:17.796 "params": { 00:23:17.796 "name": "Nvme$subsystem", 00:23:17.796 "trtype": "$TEST_TRANSPORT", 00:23:17.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.796 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "$NVMF_PORT", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.797 "hdgst": ${hdgst:-false}, 00:23:17.797 "ddgst": ${ddgst:-false} 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 } 00:23:17.797 EOF 00:23:17.797 )") 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.797 [2024-06-10 14:31:55.351587] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:23:17.797 [2024-06-10 14:31:55.351642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107773 ] 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.797 { 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme$subsystem", 00:23:17.797 "trtype": "$TEST_TRANSPORT", 00:23:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "$NVMF_PORT", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.797 "hdgst": ${hdgst:-false}, 00:23:17.797 "ddgst": ${ddgst:-false} 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 } 00:23:17.797 EOF 00:23:17.797 )") 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.797 { 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme$subsystem", 00:23:17.797 "trtype": "$TEST_TRANSPORT", 00:23:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "$NVMF_PORT", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.797 "hdgst": ${hdgst:-false}, 00:23:17.797 "ddgst": ${ddgst:-false} 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 } 00:23:17.797 EOF 00:23:17.797 )") 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.797 { 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme$subsystem", 00:23:17.797 "trtype": "$TEST_TRANSPORT", 00:23:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "$NVMF_PORT", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.797 "hdgst": ${hdgst:-false}, 00:23:17.797 "ddgst": ${ddgst:-false} 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 } 00:23:17.797 EOF 00:23:17.797 )") 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:17.797 { 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme$subsystem", 00:23:17.797 "trtype": "$TEST_TRANSPORT", 00:23:17.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "$NVMF_PORT", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.797 "hdgst": ${hdgst:-false}, 
00:23:17.797 "ddgst": ${ddgst:-false} 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 } 00:23:17.797 EOF 00:23:17.797 )") 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:17.797 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:17.797 14:31:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme1", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme2", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme3", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme4", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme5", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme6", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme7", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 
00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme8", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme9", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 },{ 00:23:17.797 "params": { 00:23:17.797 "name": "Nvme10", 00:23:17.797 "trtype": "tcp", 00:23:17.797 "traddr": "10.0.0.2", 00:23:17.797 "adrfam": "ipv4", 00:23:17.797 "trsvcid": "4420", 00:23:17.797 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:17.797 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:17.797 "hdgst": false, 00:23:17.797 "ddgst": false 00:23:17.797 }, 00:23:17.797 "method": "bdev_nvme_attach_controller" 00:23:17.797 }' 00:23:18.058 [2024-06-10 14:31:55.428308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.058 [2024-06-10 14:31:55.492349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.062 Running I/O for 1 seconds... 00:23:20.448 00:23:20.448 Latency(us) 00:23:20.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.448 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme1n1 : 1.03 248.90 15.56 0.00 0.00 254427.52 21845.33 241172.48 00:23:20.448 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme2n1 : 1.16 223.73 13.98 0.00 0.00 268746.23 5570.56 242920.11 00:23:20.448 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme3n1 : 1.10 232.54 14.53 0.00 0.00 254283.31 16820.91 235929.60 00:23:20.448 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme4n1 : 1.13 227.48 14.22 0.00 0.00 264050.99 16493.23 249910.61 00:23:20.448 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme5n1 : 1.14 281.60 17.60 0.00 0.00 209744.04 11195.73 241172.48 00:23:20.448 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme6n1 : 1.17 273.25 17.08 0.00 0.00 212930.73 17367.04 232434.35 00:23:20.448 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme7n1 : 1.13 227.28 14.21 0.00 0.00 250055.47 16930.13 248162.99 00:23:20.448 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme8n1 : 1.16 226.92 14.18 0.00 0.00 241570.40 
2689.71 258648.75 00:23:20.448 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme9n1 : 1.17 274.16 17.14 0.00 0.00 200881.83 15510.19 255153.49 00:23:20.448 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.448 Verification LBA range: start 0x0 length 0x400 00:23:20.448 Nvme10n1 : 1.18 271.09 16.94 0.00 0.00 199863.59 5352.11 269134.51 00:23:20.448 =================================================================================================================== 00:23:20.448 Total : 2486.96 155.43 0.00 0.00 233018.30 2689.71 269134.51 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.448 14:31:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.448 rmmod nvme_tcp 00:23:20.448 rmmod nvme_fabrics 00:23:20.709 rmmod nvme_keyring 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3107030 ']' 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3107030 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 3107030 ']' 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 3107030 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3107030 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 3107030' 00:23:20.709 killing process with pid 3107030 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 3107030 00:23:20.709 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 3107030 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.970 14:31:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.884 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.884 00:23:22.884 real 0m15.729s 00:23:22.884 user 0m30.712s 00:23:22.884 sys 0m6.387s 00:23:22.884 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:22.884 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.884 ************************************ 00:23:22.884 END TEST nvmf_shutdown_tc1 00:23:22.884 ************************************ 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:23.145 ************************************ 00:23:23.145 START TEST nvmf_shutdown_tc2 00:23:23.145 ************************************ 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
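[editor note] Before tc1 is closed out above, nvmftestfini unloads the kernel initiator modules, killprocess stops the target (pid 3107030 in this run), and _remove_spdk_ns tears the namespace back down, finishing with the address flush on cvl_0_1. _remove_spdk_ns is a harness helper whose body is not shown here, so the namespace lines below are an assumed manual equivalent; the other commands mirror the trace.

# Manual equivalent of the tc1 cleanup traced above (pid and names from this run).
kill 3107030                      # killprocess: terminate nvmf_tgt; the harness then waits on it

modprobe -v -r nvme-tcp           # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics

# Assumed equivalent of _remove_spdk_ns for this topology:
ip netns delete cvl_0_0_ns_spdk   # physical port cvl_0_0 returns to the root namespace
ip -4 addr flush cvl_0_1          # final cleanup command in the trace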
'_remove_spdk_ns 14> /dev/null' 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.145 14:32:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:23.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:23.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.145 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.146 14:32:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:23.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:23.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.146 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.407 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.407 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.407 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:23:23.407 00:23:23.407 --- 10.0.0.2 ping statistics --- 00:23:23.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.407 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:23:23.407 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:23.407 00:23:23.407 --- 10.0.0.1 ping statistics --- 00:23:23.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.408 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3108887 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # 
waitforlisten 3108887 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3108887 ']' 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:23.408 14:32:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.408 [2024-06-10 14:32:00.936428] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:23:23.408 [2024-06-10 14:32:00.936479] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.408 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.669 [2024-06-10 14:32:01.006008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.669 [2024-06-10 14:32:01.075765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.669 [2024-06-10 14:32:01.075800] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.669 [2024-06-10 14:32:01.075807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.669 [2024-06-10 14:32:01.075814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.669 [2024-06-10 14:32:01.075820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
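Note: the nvmftestinit trace above isolates the target-side port in its own network namespace before nvmf_tgt is launched. Condensed to its effect, and using the interface names and addresses from this run (cvl_0_0 on the target side, cvl_0_1 on the initiator side), the bring-up is roughly the sketch below; paths, core mask and debug flags are taken from the trace, everything else is as nvmf/common.sh drives it.

  # Sketch of the test-net bring-up traced by nvmf_tcp_init (names/addresses from this run)
  ip netns add cvl_0_0_ns_spdk                        # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability check
  # the target itself then runs inside the namespace, with the flags shown in the trace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E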
00:23:23.669 [2024-06-10 14:32:01.075924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.669 [2024-06-10 14:32:01.076080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.669 [2024-06-10 14:32:01.076236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.669 [2024-06-10 14:32:01.076236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.240 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:24.240 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:24.240 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.240 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:24.240 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 [2024-06-10 14:32:01.860170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.501 14:32:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.501 Malloc1 00:23:24.501 [2024-06-10 14:32:01.963570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.501 Malloc2 00:23:24.501 Malloc3 00:23:24.501 Malloc4 00:23:24.761 Malloc5 00:23:24.761 Malloc6 00:23:24.761 Malloc7 00:23:24.761 Malloc8 00:23:24.761 Malloc9 00:23:24.761 Malloc10 00:23:24.761 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.761 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:24.761 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:24.761 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3109274 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3109274 /var/tmp/bdevperf.sock 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3109274 ']' 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
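Note: before bdevperf is started, shutdown.sh writes one RPC block per subsystem into rpcs.txt (the ten "cat" lines above) and replays it against the target, which is why Malloc1..Malloc10 and listener 10.0.0.2:4420 appear in this trace. The rpcs.txt content itself is not echoed; a plausible reconstruction of one block, assuming the standard SPDK RPCs and purely illustrative bdev sizing, is sketched below. Only the transport type, address and port are confirmed by the trace (nvmf_create_transport -t tcp -o -u 8192 was already issued above).

  # Hypothetical per-subsystem block of rpcs.txt (i = 1..10); sizes are example values
  i=1
  ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420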
00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.021 { 00:23:25.021 "params": { 00:23:25.021 "name": "Nvme$subsystem", 00:23:25.021 "trtype": "$TEST_TRANSPORT", 00:23:25.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.021 "adrfam": "ipv4", 00:23:25.021 "trsvcid": "$NVMF_PORT", 00:23:25.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.021 "hdgst": ${hdgst:-false}, 00:23:25.021 "ddgst": ${ddgst:-false} 00:23:25.021 }, 00:23:25.021 "method": "bdev_nvme_attach_controller" 00:23:25.021 } 00:23:25.021 EOF 00:23:25.021 )") 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.021 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.021 { 00:23:25.021 "params": { 00:23:25.021 "name": "Nvme$subsystem", 00:23:25.021 "trtype": "$TEST_TRANSPORT", 00:23:25.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.021 "adrfam": "ipv4", 00:23:25.021 "trsvcid": "$NVMF_PORT", 00:23:25.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 [2024-06-10 14:32:02.409893] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
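Note: the gen_nvmf_target_json fragments being traced around this point are how the configuration bdevperf reads on /dev/fd/63 is produced: one bdev_nvme_attach_controller block per subsystem id, joined with IFS=',', run through jq, and handed to bdevperf via process substitution. Stripped of the xtrace noise, the generator pattern is roughly the sketch below (variable names follow nvmf/common.sh; in this run they expand to tcp, 10.0.0.2, 4420 and ids 1..10).

  # Sketch of the per-subsystem config generation traced above
  config=()
  for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "$TEST_TRANSPORT",
      "traddr": "$NVMF_FIRST_TARGET_IP",
      "adrfam": "ipv4",
      "trsvcid": "$NVMF_PORT",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
  )")
  done
  IFS=,
  printf '%s\n' "${config[*]}"   # joined blocks; the caller embeds them in the JSON bdevperf reads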
00:23:25.022 [2024-06-10 14:32:02.409946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109274 ] 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.022 { 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme$subsystem", 00:23:25.022 "trtype": "$TEST_TRANSPORT", 00:23:25.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "$NVMF_PORT", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.022 "hdgst": ${hdgst:-false}, 
00:23:25.022 "ddgst": ${ddgst:-false} 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 } 00:23:25.022 EOF 00:23:25.022 )") 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:25.022 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:25.022 14:32:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme1", 00:23:25.022 "trtype": "tcp", 00:23:25.022 "traddr": "10.0.0.2", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "4420", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.022 "hdgst": false, 00:23:25.022 "ddgst": false 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 },{ 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme2", 00:23:25.022 "trtype": "tcp", 00:23:25.022 "traddr": "10.0.0.2", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "4420", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:25.022 "hdgst": false, 00:23:25.022 "ddgst": false 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 },{ 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme3", 00:23:25.022 "trtype": "tcp", 00:23:25.022 "traddr": "10.0.0.2", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "4420", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:25.022 "hdgst": false, 00:23:25.022 "ddgst": false 00:23:25.022 }, 00:23:25.022 "method": "bdev_nvme_attach_controller" 00:23:25.022 },{ 00:23:25.022 "params": { 00:23:25.022 "name": "Nvme4", 00:23:25.022 "trtype": "tcp", 00:23:25.022 "traddr": "10.0.0.2", 00:23:25.022 "adrfam": "ipv4", 00:23:25.022 "trsvcid": "4420", 00:23:25.022 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:25.022 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:25.022 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme5", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme6", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme7", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 
00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme8", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme9", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 },{ 00:23:25.023 "params": { 00:23:25.023 "name": "Nvme10", 00:23:25.023 "trtype": "tcp", 00:23:25.023 "traddr": "10.0.0.2", 00:23:25.023 "adrfam": "ipv4", 00:23:25.023 "trsvcid": "4420", 00:23:25.023 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:25.023 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:25.023 "hdgst": false, 00:23:25.023 "ddgst": false 00:23:25.023 }, 00:23:25.023 "method": "bdev_nvme_attach_controller" 00:23:25.023 }' 00:23:25.023 [2024-06-10 14:32:02.485076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.023 [2024-06-10 14:32:02.549802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.407 Running I/O for 10 seconds... 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.407 14:32:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:26.407 14:32:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:26.667 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3109274 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3109274 ']' 00:23:26.927 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3109274 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3109274 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3109274' 00:23:27.189 killing process with pid 3109274 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3109274 00:23:27.189 14:32:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3109274 00:23:27.189 Received shutdown signal, test time was about 0.960673 seconds 00:23:27.189 00:23:27.189 Latency(us) 00:23:27.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.189 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme1n1 : 0.94 271.43 16.96 0.00 0.00 232934.40 15728.64 253405.87 00:23:27.189 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme2n1 : 0.96 266.73 16.67 0.00 0.00 232209.49 19005.44 235929.60 00:23:27.189 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme3n1 : 0.95 270.09 16.88 0.00 0.00 224361.81 20097.71 219327.15 00:23:27.189 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme4n1 : 0.92 208.82 13.05 0.00 0.00 283386.03 18786.99 249910.61 00:23:27.189 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme5n1 : 0.95 268.43 16.78 0.00 0.00 216174.08 21517.65 258648.75 00:23:27.189 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme6n1 : 0.94 204.09 12.76 0.00 0.00 277567.15 21408.43 255153.49 00:23:27.189 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme7n1 : 0.93 207.13 12.95 0.00 0.00 266500.27 20097.71 228939.09 00:23:27.189 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme8n1 : 0.93 205.80 12.86 0.00 0.00 262045.01 20425.39 244667.73 00:23:27.189 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme9n1 : 0.96 267.52 16.72 0.00 0.00 197336.11 24466.77 246415.36 00:23:27.189 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:27.189 Verification LBA range: start 0x0 length 0x400 00:23:27.189 Nvme10n1 : 0.95 202.06 12.63 0.00 0.00 254574.65 19988.48 272629.76 00:23:27.189 =================================================================================================================== 00:23:27.189 Total : 2372.10 148.26 0.00 0.00 241265.23 15728.64 272629.76 00:23:27.450 14:32:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3108887 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.393 rmmod nvme_tcp 00:23:28.393 rmmod nvme_fabrics 00:23:28.393 rmmod nvme_keyring 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3108887 ']' 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3108887 00:23:28.393 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 3108887 ']' 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 3108887 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3108887 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3108887' 00:23:28.394 killing process with pid 3108887 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 3108887 00:23:28.394 14:32:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 3108887 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.655 14:32:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.655 14:32:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.204 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.204 00:23:31.204 real 0m7.699s 00:23:31.204 user 0m23.025s 00:23:31.204 sys 0m1.222s 00:23:31.204 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:31.204 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.204 ************************************ 00:23:31.204 END TEST nvmf_shutdown_tc2 00:23:31.204 ************************************ 00:23:31.204 14:32:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:31.205 ************************************ 00:23:31.205 START TEST nvmf_shutdown_tc3 00:23:31.205 ************************************ 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.205 
14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.205 14:32:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:31.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:31.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:31.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.205 14:32:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:31.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.205 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.206 14:32:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:23:31.206 00:23:31.206 --- 10.0.0.2 ping statistics --- 00:23:31.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.206 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:31.206 00:23:31.206 --- 10.0.0.1 ping statistics --- 00:23:31.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.206 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3110703 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3110703 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3110703 ']' 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:31.206 14:32:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.206 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:31.206 [2024-06-10 14:32:08.679025] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:23:31.206 [2024-06-10 14:32:08.679078] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.206 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.206 [2024-06-10 14:32:08.743426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.467 [2024-06-10 14:32:08.808001] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.467 [2024-06-10 14:32:08.808036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.467 [2024-06-10 14:32:08.808044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.467 [2024-06-10 14:32:08.808050] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.467 [2024-06-10 14:32:08.808056] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
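The interface plumbing traced just above (nvmf_tcp_init in nvmf/common.sh) amounts to moving one port of the E810 NIC into a private network namespace and addressing the pair as a 10.0.0.0/24 point-to-point link, after which the target is started inside that namespace. A minimal sketch of those steps, using only commands visible in the trace; the interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are specific to this host:

    # Sketch of the TCP test topology set up by nvmf_tcp_init in this run.
    TARGET_IF=cvl_0_0        # moved into the namespace, addressed 10.0.0.2 (target side)
    INITIATOR_IF=cvl_0_1     # stays in the root namespace, addressed 10.0.0.1 (initiator side)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP traffic to the default port and verify reachability both ways
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1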
00:23:31.467 [2024-06-10 14:32:08.808156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.467 [2024-06-10 14:32:08.808333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.467 [2024-06-10 14:32:08.808454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.467 [2024-06-10 14:32:08.808455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.467 [2024-06-10 14:32:08.950136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.467 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.467 Malloc1 00:23:31.467 [2024-06-10 14:32:09.051056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.728 Malloc2 00:23:31.728 Malloc3 00:23:31.728 Malloc4 00:23:31.728 Malloc5 00:23:31.728 Malloc6 00:23:31.728 Malloc7 00:23:31.728 Malloc8 00:23:31.990 Malloc9 00:23:31.990 Malloc10 00:23:31.990 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.990 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3110790 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3110790 /var/tmp/bdevperf.sock 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 3110790 ']' 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
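The ten "for i / cat" iterations above append one block per subsystem to rpcs.txt, which the harness then feeds to the target's RPC server; the Malloc1..Malloc10 lines are the resulting bdevs being created, with the TCP listener coming up on 10.0.0.2 port 4420. The exact heredoc in shutdown.sh is not reproduced in the trace; a hypothetical equivalent using the standard SPDK RPCs (bdev sizes and the serial-number prefix are illustrative) would be:

    # One malloc bdev, one subsystem, one namespace and one TCP listener per index (sketch).
    RPC=./scripts/rpc.py            # assumed path to SPDK's rpc.py
    for i in {1..10}; do
        "$RPC" bdev_malloc_create -b "Malloc$i" 64 512
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done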
00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.991 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.991 { 00:23:31.991 "params": { 00:23:31.991 "name": "Nvme$subsystem", 00:23:31.991 "trtype": "$TEST_TRANSPORT", 00:23:31.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.991 "adrfam": "ipv4", 00:23:31.991 "trsvcid": "$NVMF_PORT", 00:23:31.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.991 "hdgst": ${hdgst:-false}, 00:23:31.991 "ddgst": ${ddgst:-false} 00:23:31.991 }, 00:23:31.991 "method": "bdev_nvme_attach_controller" 00:23:31.991 } 00:23:31.991 EOF 00:23:31.991 )") 00:23:31.992 [2024-06-10 14:32:09.497121] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:23:31.992 [2024-06-10 14:32:09.497173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110790 ] 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.992 { 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme$subsystem", 00:23:31.992 "trtype": "$TEST_TRANSPORT", 00:23:31.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "$NVMF_PORT", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.992 "hdgst": ${hdgst:-false}, 00:23:31.992 "ddgst": ${ddgst:-false} 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 } 00:23:31.992 EOF 00:23:31.992 )") 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.992 { 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme$subsystem", 00:23:31.992 "trtype": "$TEST_TRANSPORT", 00:23:31.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "$NVMF_PORT", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.992 "hdgst": ${hdgst:-false}, 00:23:31.992 "ddgst": ${ddgst:-false} 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 } 00:23:31.992 EOF 00:23:31.992 )") 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.992 { 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme$subsystem", 00:23:31.992 "trtype": "$TEST_TRANSPORT", 00:23:31.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "$NVMF_PORT", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.992 "hdgst": ${hdgst:-false}, 00:23:31.992 "ddgst": ${ddgst:-false} 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 } 00:23:31.992 EOF 00:23:31.992 )") 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:31.992 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:31.992 14:32:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme1", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme2", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme3", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme4", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme5", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme6", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme7", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme8", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:31.992 "hdgst": false, 
00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme9", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 },{ 00:23:31.992 "params": { 00:23:31.992 "name": "Nvme10", 00:23:31.992 "trtype": "tcp", 00:23:31.992 "traddr": "10.0.0.2", 00:23:31.992 "adrfam": "ipv4", 00:23:31.992 "trsvcid": "4420", 00:23:31.992 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:31.992 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:31.992 "hdgst": false, 00:23:31.992 "ddgst": false 00:23:31.992 }, 00:23:31.992 "method": "bdev_nvme_attach_controller" 00:23:31.992 }' 00:23:31.992 [2024-06-10 14:32:09.574069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.254 [2024-06-10 14:32:09.639266] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.637 Running I/O for 10 seconds... 00:23:33.637 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:33.637 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:33.637 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:33.637 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.637 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:33.899 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:34.160 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3110703 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 3110703 ']' 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 3110703 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3110703 00:23:34.432 
14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3110703' 00:23:34.432 killing process with pid 3110703 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 3110703 00:23:34.432 14:32:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 3110703 00:23:34.432 [2024-06-10 14:32:11.967755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967826] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967839] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967865] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967871] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967884] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432 [2024-06-10 14:32:11.967909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1065a00 is same with the state(5) to be set 00:23:34.432
[dozens of further identical nvmf_tcp_qpair_set_recv_state *ERROR* records elided: the same "recv state ... is same with the state(5) to be set" message repeats during target shutdown, first for tqpair=0x1065a00 and then for tqpair=0x1063960 and tqpair=0x1063e20; the run of identical records continues to the end of this excerpt]
state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972581] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972607] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972619] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972625] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972631] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972637] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972644] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972650] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972662] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972677] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972690] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.972703] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063e20 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973454] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973475] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973480] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973485] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973490] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973495] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973499] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973512] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.434 [2024-06-10 14:32:11.973522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973526] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973530] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973535] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973539] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973543] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973552] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973561] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973569] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 
14:32:11.973578] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973591] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973595] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973605] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973609] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973614] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973618] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973622] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973627] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973631] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973636] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973641] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973650] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973654] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973659] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973663] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973667] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973672] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same 
with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973685] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973691] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973695] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973700] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973713] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973718] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973722] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973744] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.973753] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10642c0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.974081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2907f80 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.974202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27917a0 is same with the state(5) to be set 00:23:34.435 [2024-06-10 14:32:11.974295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.435 [2024-06-10 14:32:11.974341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.435 [2024-06-10 14:32:11.974349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2773690 is same with the state(5) to be set 00:23:34.436 [2024-06-10 14:32:11.974407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2764650 is same with the state(5) to be set 00:23:34.436 [2024-06-10 14:32:11.974498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974560] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292ff10 is same with the state(5) to be set 00:23:34.436 [2024-06-10 14:32:11.974582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.436 [2024-06-10 14:32:11.974635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.436 [2024-06-10 14:32:11.974643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27888a0 is same with the state(5) to be set 00:23:34.436 (from 14:32:11.974618 through 14:32:11.975088 the target repeatedly logs tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1064760 is same with the state(5) to be set, interleaved with the host-side prints; over the same span the host logs nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 through cid:25 nsid:1 lba:24576 through lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) 00:23:34.438 [2024-06-10 14:32:11.975166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.438 [2024-06-10 14:32:11.975683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.438 [2024-06-10 14:32:11.975689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281c8c0 is same with the state(5) to be set 00:23:34.439 [2024-06-10 14:32:11.975828] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x281c8c0 was disconnected and freed. reset controller. 
00:23:34.439 [2024-06-10 14:32:11.975892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.975985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.975994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1064c00 is same with the state(5) to be set 00:23:34.439 [2024-06-10 14:32:11.976042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 
[2024-06-10 14:32:11.976051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 
14:32:11.976210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.439 [2024-06-10 14:32:11.976343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.439 [2024-06-10 14:32:11.976352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 
14:32:11.976375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 
14:32:11.976536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.440 [2024-06-10 14:32:11.976654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.440 [2024-06-10 14:32:11.976652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976677] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976689] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976697] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976710] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976718] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976724] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976731] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976744] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976763] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976777] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976783] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976795] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976829] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 
14:32:11.976836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976872] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976884] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.440 [2024-06-10 14:32:11.976898] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976911] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976924] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976942] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976948] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976974] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same 
with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976986] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.976999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977006] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977012] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977020] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977026] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977033] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977039] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977045] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977051] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977064] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10650c0 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977673] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977686] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977693] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977697] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977701] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977706] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977712] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977717] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977725] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977744] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977754] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977758] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977774] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977785] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977799] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977808] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the 
state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977817] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977854] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977869] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977873] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.441 [2024-06-10 14:32:11.977878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977883] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977887] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977891] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977901] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977906] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977914] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977918] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977932] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.977981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978080] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978132] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978228] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978277] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978332] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978429] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.978527] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1065560 is same with the state(5) to be set 00:23:34.442 [2024-06-10 14:32:11.990526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.990835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.990909] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28b1e40 was disconnected and freed. reset controller. 00:23:34.442 [2024-06-10 14:32:11.991587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.991609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.991624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.991631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.991641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.991648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.442 [2024-06-10 14:32:11.991657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.442 [2024-06-10 14:32:11.991664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:34.443 [2024-06-10 14:32:11.991886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.991988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.991997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 
[2024-06-10 14:32:11.992047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 
14:32:11.992204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.443 [2024-06-10 14:32:11.992224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.443 [2024-06-10 14:32:11.992231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.444 [2024-06-10 14:32:11.992636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:34.444 [2024-06-10 14:32:11.992703] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x281e8b0 was disconnected and freed. reset controller. 
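The burst of ABORTED - SQ DELETION completions above ends with spdk_nvme_qpair_process_completions() reporting a CQ transport error of -6 (No such device or address) and bdev_nvme freeing the disconnected qpair before resetting the controller. As a rough illustration of the pattern the test exercises (a sketch against the public SPDK NVMe driver API, not the bdev_nvme implementation itself; the helper name and the ctrlr/qpair plumbing are assumptions made for the example):

    /* Sketch only: poll an I/O qpair and recover from a transport error by
     * resetting the controller, roughly the sequence the log above shows. */
    #include "spdk/nvme.h"
    #include <stdio.h>

    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
    {
            /* A negative return means the qpair hit a transport-level failure
             * (e.g. the "-6" CQ transport error printed in the log). */
            int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0 /* drain all */);
            if (rc >= 0) {
                    return;
            }

            fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);

            /* Outstanding I/O on the dead qpair completes with
             * ABORTED - SQ DELETION, i.e. the flood of notices printed above. */
            spdk_nvme_ctrlr_free_io_qpair(*qpair);

            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                    /* Corresponds to the reconnect failures seen further down in
                     * the log (connect() errno 111, "controller reinitialization
                     * failed", controller left in failed state). */
                    fprintf(stderr, "controller reset failed\n");
                    *qpair = NULL;
                    return;
            }

            /* Reset succeeded: allocate a fresh I/O qpair with default options. */
            *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }

In the log itself the reset does not succeed: the target refuses the TCP reconnect (errno 111), so the controllers end up in the failed state reported a few entries later.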
00:23:34.444 [2024-06-10 14:32:11.992894] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2907f80 (9): Bad file descriptor 00:23:34.444 [2024-06-10 14:32:11.992916] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27917a0 (9): Bad file descriptor 00:23:34.444 [2024-06-10 14:32:11.992931] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2773690 (9): Bad file descriptor 00:23:34.444 [2024-06-10 14:32:11.992961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.992971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.992987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.992995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277f1a0 is same with the state(5) to be set 00:23:34.444 [2024-06-10 14:32:11.993044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x28bddd0 is same with the state(5) to be set 00:23:34.444 [2024-06-10 14:32:11.993130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.444 [2024-06-10 14:32:11.993160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.444 [2024-06-10 14:32:11.993167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.445 [2024-06-10 14:32:11.993181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28bd690 is same with the state(5) to be set 00:23:34.445 [2024-06-10 14:32:11.993204] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2764650 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.993225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.445 [2024-06-10 14:32:11.993233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.445 [2024-06-10 14:32:11.993248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.445 [2024-06-10 14:32:11.993262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.445 [2024-06-10 14:32:11.993276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:11.993283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2788aa0 is same with the state(5) to be set 00:23:34.445 [2024-06-10 14:32:11.993299] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292ff10 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.993311] 
nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27888a0 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.997152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.445 [2024-06-10 14:32:11.997179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:34.445 [2024-06-10 14:32:11.997562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:34.445 [2024-06-10 14:32:11.997587] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x277f1a0 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.997859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.445 [2024-06-10 14:32:11.997873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2764650 with addr=10.0.0.2, port=4420 00:23:34.445 [2024-06-10 14:32:11.997885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2764650 is same with the state(5) to be set 00:23:34.445 [2024-06-10 14:32:11.998029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.445 [2024-06-10 14:32:11.998038] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x292ff10 with addr=10.0.0.2, port=4420 00:23:34.445 [2024-06-10 14:32:11.998045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292ff10 is same with the state(5) to be set 00:23:34.445 [2024-06-10 14:32:11.998614] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.998656] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999112] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999144] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2764650 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.999155] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292ff10 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:11.999231] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999275] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999327] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999368] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:34.445 [2024-06-10 14:32:11.999879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.445 [2024-06-10 14:32:11.999916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x277f1a0 with addr=10.0.0.2, port=4420 00:23:34.445 [2024-06-10 14:32:11.999929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277f1a0 is same with the state(5) to be set 00:23:34.445 [2024-06-10 14:32:11.999942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.445 [2024-06-10 14:32:11.999950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.445 [2024-06-10 14:32:11.999960] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.445 [2024-06-10 14:32:11.999979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:34.445 [2024-06-10 14:32:11.999987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:34.445 [2024-06-10 14:32:11.999995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:34.445 [2024-06-10 14:32:12.000102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.445 [2024-06-10 14:32:12.000114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.445 [2024-06-10 14:32:12.000124] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x277f1a0 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:12.000161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:34.445 [2024-06-10 14:32:12.000168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:34.445 [2024-06-10 14:32:12.000175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:34.445 [2024-06-10 14:32:12.000215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.445 [2024-06-10 14:32:12.002905] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bddd0 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:12.002927] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bd690 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:12.002948] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2788aa0 (9): Bad file descriptor 00:23:34.445 [2024-06-10 14:32:12.003055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.445 [2024-06-10 14:32:12.003251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.445 [2024-06-10 14:32:12.003263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.446 [2024-06-10 14:32:12.003628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.446 [2024-06-10 14:32:12.003757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.446 [2024-06-10 14:32:12.003766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 
14:32:12.003789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.003984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.003993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.004097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.004105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28b32d0 is same with the state(5) to be set 00:23:34.447 [2024-06-10 14:32:12.005406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.447 [2024-06-10 14:32:12.005622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.447 [2024-06-10 14:32:12.005629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.005992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.005999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.448 [2024-06-10 14:32:12.006227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.448 [2024-06-10 14:32:12.006236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.006476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.006484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275e6b0 is same with the state(5) to be set 00:23:34.449 [2024-06-10 14:32:12.007745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.007988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.007995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.449 [2024-06-10 14:32:12.008118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.449 [2024-06-10 14:32:12.008125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:34.450 [2024-06-10 14:32:12.008514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 
14:32:12.008675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.450 [2024-06-10 14:32:12.008683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.450 [2024-06-10 14:32:12.008692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.008796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.008804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x275fae0 is same with the state(5) to be set 00:23:34.451 [2024-06-10 14:32:12.010087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.451 [2024-06-10 14:32:12.010441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.451 [2024-06-10 14:32:12.010447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.010996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.011005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.452 [2024-06-10 14:32:12.011014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.452 [2024-06-10 14:32:12.011023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.453 [2024-06-10 14:32:12.011128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.453 [2024-06-10 14:32:12.011136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28225e0 is same with the state(5) to be set 00:23:34.453 [2024-06-10 14:32:12.012930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:34.453 [2024-06-10 14:32:12.012953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:34.453 [2024-06-10 14:32:12.012962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:34.453 [2024-06-10 14:32:12.012971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:34.453 [2024-06-10 14:32:12.013489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.453 [2024-06-10 14:32:12.013505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27917a0 with addr=10.0.0.2, port=4420 00:23:34.453 [2024-06-10 14:32:12.013513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27917a0 is same with the state(5) to be set 00:23:34.453 [2024-06-10 14:32:12.013807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.453 [2024-06-10 14:32:12.013816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2773690 with addr=10.0.0.2, port=4420 00:23:34.453 [2024-06-10 14:32:12.013823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2773690 is same with the state(5) to be set 00:23:34.453 [2024-06-10 14:32:12.014130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.453 [2024-06-10 14:32:12.014139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27888a0 with addr=10.0.0.2, port=4420 00:23:34.453 [2024-06-10 14:32:12.014150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27888a0 is same with the state(5) to be set 00:23:34.453 [2024-06-10 14:32:12.014478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.453 [2024-06-10 14:32:12.014488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2907f80 with addr=10.0.0.2, port=4420 00:23:34.453 [2024-06-10 14:32:12.014495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2907f80 is same with the state(5) to be set 00:23:34.728 [2024-06-10 14:32:12.015304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.728 [2024-06-10 14:32:12.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 
14:32:12.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015646] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.729 [2024-06-10 14:32:12.015798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.729 [2024-06-10 14:32:12.015806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.015992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.015999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.730 [2024-06-10 14:32:12.016248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.730 [2024-06-10 14:32:12.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.016271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.016287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.016305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.016325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.016348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281d420 is same with the state(5) to be set 00:23:34.731 [2024-06-10 14:32:12.017616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.731 [2024-06-10 14:32:12.017915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.731 [2024-06-10 14:32:12.017921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.017930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.017937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.017946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.017953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.017962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.017971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.017980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.017987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.017996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.732 [2024-06-10 14:32:12.018378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.732 [2024-06-10 14:32:12.018435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.732 [2024-06-10 14:32:12.018441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 
14:32:12.018539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.018652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.018660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x281fd80 is same with the state(5) to be set 00:23:34.733 [2024-06-10 14:32:12.019924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.019936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.019946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.019953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.019962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.019969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.019978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.019985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.019994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.733 [2024-06-10 14:32:12.020124] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.733 [2024-06-10 14:32:12.020131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:34.734 [2024-06-10 14:32:12.020624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.734 [2024-06-10 14:32:12.020633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:34.735 [2024-06-10 14:32:12.020790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 14:32:12.020936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.735 [2024-06-10 14:32:12.020946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:34.735 [2024-06-10 
14:32:12.020953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.735 [2024-06-10 14:32:12.020962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:34.736 [2024-06-10 14:32:12.020969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:34.736 [2024-06-10 14:32:12.020976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2821100 is same with the state(5) to be set
00:23:34.736 [2024-06-10 14:32:12.022696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:34.736 [2024-06-10 14:32:12.022718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.736 [2024-06-10 14:32:12.022727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:34.736 [2024-06-10 14:32:12.022737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:34.736 [2024-06-10 14:32:12.022745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:34.736 task offset: 24576 on job bdev=Nvme1n1 fails
00:23:34.736
00:23:34.736 Latency(us)
00:23:34.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.736 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme1n1 ended in about 0.96 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme1n1 : 0.96 200.61 12.54 66.87 0.00 236577.49 20316.16 253405.87
00:23:34.736 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme2n1 : 0.96 200.37 12.52 66.79 0.00 232012.16 19988.48 230686.72
00:23:34.736 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme3n1 ended in about 0.97 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme3n1 : 0.97 198.37 12.40 66.12 0.00 229577.81 21189.97 251658.24
00:23:34.736 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme4n1 ended in about 0.97 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme4n1 : 0.97 197.89 12.37 65.96 0.00 225302.83 19770.03 217579.52
00:23:34.736 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme5n1 ended in about 0.97 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme5n1 : 0.97 197.42 12.34 65.81 0.00 221034.67 23702.19 241172.48
00:23:34.736 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme6n1 ended in about 0.98 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme6n1 : 0.98 134.69 8.42 65.30 0.00 284908.56 18677.76 258648.75
00:23:34.736 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme7n1 : 0.96 200.09 12.51 66.70 0.00 208084.75 4614.83 237677.23
00:23:34.736 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme8n1 ended in about 0.98 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme8n1 : 0.98 130.30 8.14 65.15 0.00 278729.67 15619.41 286610.77
00:23:34.736 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme9n1 ended in about 0.98 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme9n1 : 0.98 129.99 8.12 65.00 0.00 273034.24 15400.96 251658.24
00:23:34.736 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.736 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:34.736 Verification LBA range: start 0x0 length 0x400
00:23:34.736 Nvme10n1 : 0.97 131.30 8.21 65.65 0.00 263423.15 24248.32 279620.27
00:23:34.736 ===================================================================================================================
00:23:34.736 Total : 1721.03 107.56 659.35 0.00 242036.81 4614.83 286610.77
00:23:34.736 [2024-06-10 14:32:12.047025] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:34.736 [2024-06-10 14:32:12.047108] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27917a0 (9): Bad file descriptor
00:23:34.736 [2024-06-10 14:32:12.047121] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2773690 (9): Bad file descriptor
00:23:34.736 [2024-06-10 14:32:12.047130] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27888a0 (9): Bad file descriptor
00:23:34.736 [2024-06-10 14:32:12.047141] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2907f80 (9): Bad file descriptor
00:23:34.736 [2024-06-10 14:32:12.047248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:34.736 [2024-06-10 14:32:12.047701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.736 [2024-06-10 14:32:12.047717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x292ff10 with addr=10.0.0.2, port=4420
00:23:34.736 [2024-06-10 14:32:12.047726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292ff10 is same with the state(5) to be set
00:23:34.736 [2024-06-10 14:32:12.047966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.736 [2024-06-10 14:32:12.047975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2764650 with addr=10.0.0.2, port=4420
00:23:34.736 [2024-06-10 14:32:12.047982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2764650 is same with the state(5) to be set
00:23:34.736 [2024-06-10 14:32:12.048160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.736 [2024-06-10 14:32:12.048174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x277f1a0 with addr=10.0.0.2, port=4420
00:23:34.736 [2024-06-10 14:32:12.048182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x277f1a0 is same with the state(5) to be set
00:23:34.736 [2024-06-10 14:32:12.048476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.736 [2024-06-10 14:32:12.048486]
nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2788aa0 with addr=10.0.0.2, port=4420 00:23:34.736 [2024-06-10 14:32:12.048493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2788aa0 is same with the state(5) to be set 00:23:34.736 [2024-06-10 14:32:12.048800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.736 [2024-06-10 14:32:12.048809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28bd690 with addr=10.0.0.2, port=4420 00:23:34.736 [2024-06-10 14:32:12.048815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28bd690 is same with the state(5) to be set 00:23:34.736 [2024-06-10 14:32:12.048823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:34.736 [2024-06-10 14:32:12.048829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:34.736 [2024-06-10 14:32:12.048837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:34.736 [2024-06-10 14:32:12.048848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.048854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.048861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:34.737 [2024-06-10 14:32:12.048870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.048876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.048883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:34.737 [2024-06-10 14:32:12.048893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.048900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.048906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:34.737 [2024-06-10 14:32:12.048927] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.737 [2024-06-10 14:32:12.048938] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.737 [2024-06-10 14:32:12.048950] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.737 [2024-06-10 14:32:12.048960] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:34.737 [2024-06-10 14:32:12.049804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.049815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.049821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.737 [2024-06-10 14:32:12.049828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.737 [2024-06-10 14:32:12.050185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28bddd0 with addr=10.0.0.2, port=4420 00:23:34.737 [2024-06-10 14:32:12.050196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28bddd0 is same with the state(5) to be set 00:23:34.737 [2024-06-10 14:32:12.050206] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292ff10 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050215] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2764650 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050224] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x277f1a0 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050233] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2788aa0 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050242] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bd690 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050514] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bddd0 (9): Bad file descriptor 00:23:34.737 [2024-06-10 14:32:12.050527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:34.737 [2024-06-10 14:32:12.050550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.737 [2024-06-10 14:32:12.050573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:34.737 [2024-06-10 14:32:12.050595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:34.737 [2024-06-10 14:32:12.050618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:34.737 [2024-06-10 14:32:12.050675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050701] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 [2024-06-10 14:32:12.050707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:34.737 [2024-06-10 14:32:12.050714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:34.737 [2024-06-10 14:32:12.050720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:34.737 [2024-06-10 14:32:12.050750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.737 14:32:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:34.737 14:32:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:35.680 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3110790 00:23:35.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3110790) - No such process 00:23:35.680 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:35.680 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:35.680 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:35.680 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:23:35.941 rmmod nvme_tcp 00:23:35.941 rmmod nvme_fabrics 00:23:35.941 rmmod nvme_keyring 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.941 14:32:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:37.885 00:23:37.885 real 0m7.112s 00:23:37.885 user 0m16.502s 00:23:37.885 sys 0m1.174s 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.885 ************************************ 00:23:37.885 END TEST nvmf_shutdown_tc3 00:23:37.885 ************************************ 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:37.885 00:23:37.885 real 0m30.914s 00:23:37.885 user 1m10.391s 00:23:37.885 sys 0m9.029s 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:37.885 14:32:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:37.885 ************************************ 00:23:37.885 END TEST nvmf_shutdown 00:23:37.885 ************************************ 00:23:38.147 14:32:15 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 14:32:15 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 14:32:15 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:38.147 14:32:15 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:38.147 14:32:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.147 
************************************ 00:23:38.147 START TEST nvmf_multicontroller 00:23:38.147 ************************************ 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:38.147 * Looking for test storage... 00:23:38.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.147 14:32:15 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.147 14:32:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.731 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.731 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.732 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.732 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.732 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.732 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.732 14:32:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.732 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.733 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.733 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:23:44.992 00:23:44.992 --- 10.0.0.2 ping statistics --- 00:23:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.992 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:23:44.992 00:23:44.992 --- 10.0.0.1 ping statistics --- 00:23:44.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.992 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:44.992 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3115826 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3115826 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 3115826 ']' 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:45.252 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.252 [2024-06-10 14:32:22.677772] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:23:45.252 [2024-06-10 14:32:22.677823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.252 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.252 [2024-06-10 14:32:22.743895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:45.252 [2024-06-10 14:32:22.807983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:45.252 [2024-06-10 14:32:22.808019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.252 [2024-06-10 14:32:22.808028] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.252 [2024-06-10 14:32:22.808034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.252 [2024-06-10 14:32:22.808040] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.252 [2024-06-10 14:32:22.808145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.252 [2024-06-10 14:32:22.808302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.252 [2024-06-10 14:32:22.808303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 [2024-06-10 14:32:22.945888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 Malloc0 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 [2024-06-10 14:32:23.010662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 [2024-06-10 14:32:23.022609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 Malloc1 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=3115863 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3115863 /var/tmp/bdevperf.sock 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 3115863 ']' 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:45.512 14:32:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.453 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:46.453 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:46.453 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:46.453 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.453 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.714 NVMe0n1 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.714 1 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.714 request: 00:23:46.714 { 00:23:46.714 "name": "NVMe0", 00:23:46.714 "trtype": "tcp", 00:23:46.714 "traddr": "10.0.0.2", 00:23:46.714 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:46.714 "hostaddr": "10.0.0.2", 00:23:46.714 "hostsvcid": "60000", 00:23:46.714 "adrfam": "ipv4", 00:23:46.714 "trsvcid": "4420", 00:23:46.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.714 "method": "bdev_nvme_attach_controller", 00:23:46.714 "req_id": 1 00:23:46.714 } 00:23:46.714 Got JSON-RPC error response 00:23:46.714 response: 00:23:46.714 { 00:23:46.714 "code": -114, 00:23:46.714 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:46.714 } 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:46.714 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.715 request: 00:23:46.715 { 00:23:46.715 "name": "NVMe0", 00:23:46.715 "trtype": "tcp", 00:23:46.715 "traddr": "10.0.0.2", 00:23:46.715 "hostaddr": "10.0.0.2", 00:23:46.715 "hostsvcid": "60000", 00:23:46.715 "adrfam": "ipv4", 00:23:46.715 "trsvcid": "4420", 00:23:46.715 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.715 "method": "bdev_nvme_attach_controller", 00:23:46.715 "req_id": 1 00:23:46.715 } 00:23:46.715 Got JSON-RPC error response 00:23:46.715 response: 00:23:46.715 { 00:23:46.715 "code": -114, 00:23:46.715 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:46.715 } 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.715 request: 00:23:46.715 { 00:23:46.715 "name": "NVMe0", 00:23:46.715 "trtype": "tcp", 00:23:46.715 "traddr": "10.0.0.2", 00:23:46.715 "hostaddr": "10.0.0.2", 00:23:46.715 "hostsvcid": "60000", 00:23:46.715 "adrfam": "ipv4", 00:23:46.715 "trsvcid": "4420", 00:23:46.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.715 "multipath": "disable", 00:23:46.715 "method": "bdev_nvme_attach_controller", 00:23:46.715 "req_id": 1 00:23:46.715 } 00:23:46.715 Got JSON-RPC error response 00:23:46.715 response: 00:23:46.715 { 00:23:46.715 "code": -114, 00:23:46.715 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:46.715 } 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # es=1 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.715 request: 00:23:46.715 { 00:23:46.715 "name": "NVMe0", 00:23:46.715 "trtype": "tcp", 00:23:46.715 "traddr": "10.0.0.2", 00:23:46.715 "hostaddr": "10.0.0.2", 00:23:46.715 "hostsvcid": "60000", 00:23:46.715 "adrfam": "ipv4", 00:23:46.715 "trsvcid": "4420", 00:23:46.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.715 "multipath": "failover", 00:23:46.715 "method": "bdev_nvme_attach_controller", 00:23:46.715 "req_id": 1 00:23:46.715 } 00:23:46.715 Got JSON-RPC error response 00:23:46.715 response: 00:23:46.715 { 00:23:46.715 "code": -114, 00:23:46.715 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:46.715 } 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.715 00:23:46.715 14:32:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.715 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.976 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:46.976 14:32:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.361 0 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 3115863 ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3115863' 00:23:48.361 killing process with 
pid 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 3115863 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:23:48.361 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:48.361 [2024-06-10 14:32:23.141845] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:23:48.361 [2024-06-10 14:32:23.141907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115863 ] 00:23:48.361 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.361 [2024-06-10 14:32:23.218975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.361 [2024-06-10 14:32:23.283657] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.361 [2024-06-10 14:32:24.412551] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name d0eff16f-5a93-4919-95a6-d965c5923660 already exists 00:23:48.361 [2024-06-10 14:32:24.412582] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:d0eff16f-5a93-4919-95a6-d965c5923660 alias for bdev NVMe1n1 00:23:48.361 [2024-06-10 14:32:24.412592] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:48.361 Running I/O for 1 seconds... 
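What try.txt captured above is the bdevperf side of the multicontroller checks driven over /var/tmp/bdevperf.sock: re-attaching under the existing name NVMe0 with a different hostnqn, against cnode2, with -x disable, or with -x failover on the same 4420 path is rejected with -114; a plain attach to the second listener on 4421 is accepted (and then detached); and a second controller NVMe1 on 4421 succeeds, the bdev_name_add/spdk_bdev_register errors above apparently coming from that second controller exposing the same namespace UUID that NVMe0n1 already registered. A condensed sketch of the same sequence, assuming the build tree and the 10.0.0.2 listeners from this run (paths and addresses are host-specific), would look roughly like:

  # Sketch only - mirrors the rpc_cmd calls recorded above; adjust paths/addresses.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Start bdevperf idle (-z) with its own RPC socket, then drive it via rpc.py.
  $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w write -t 1 -f &
  rpc() { $SPDK/scripts/rpc.py -s $SOCK "$@"; }

  # Primary path on port 4420 -> bdev NVMe0n1.
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n $NQN -i 10.0.0.2 -c 60000

  # Representative negative check: the same controller name on the same network
  # path is refused with -114 even if the hostnqn differs (likewise for cnode2,
  # -x disable and -x failover in the records above).
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n $NQN -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 || true

  # The 4421 listener is a distinct path: a plain attach there succeeds, and a
  # second controller name yields two controllers over the same namespace.
  rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n $NQN -i 10.0.0.2 -c 60000
  rpc bdev_nvme_get_controllers | grep -c NVMe        # expect 2

  # Run the queued write workload defined on the bdevperf command line; its
  # one-second results are what follow in try.txt.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
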
00:23:48.361 00:23:48.361 Latency(us) 00:23:48.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.361 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:48.361 NVMe0n1 : 1.00 20307.87 79.33 0.00 0.00 6290.23 2088.96 16056.32 00:23:48.361 =================================================================================================================== 00:23:48.361 Total : 20307.87 79.33 0.00 0.00 6290.23 2088.96 16056.32 00:23:48.361 Received shutdown signal, test time was about 1.000000 seconds 00:23:48.361 00:23:48.361 Latency(us) 00:23:48.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.361 =================================================================================================================== 00:23:48.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.361 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.361 rmmod nvme_tcp 00:23:48.361 rmmod nvme_fabrics 00:23:48.361 rmmod nvme_keyring 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3115826 ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3115826 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 3115826 ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 3115826 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3115826 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3115826' 00:23:48.361 killing process with pid 3115826 00:23:48.361 14:32:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 3115826 00:23:48.361 14:32:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 3115826 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.622 14:32:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.171 14:32:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.171 00:23:51.171 real 0m12.575s 00:23:51.171 user 0m14.786s 00:23:51.171 sys 0m5.772s 00:23:51.171 14:32:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:51.171 14:32:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.171 ************************************ 00:23:51.171 END TEST nvmf_multicontroller 00:23:51.171 ************************************ 00:23:51.171 14:32:28 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:51.171 14:32:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:51.171 14:32:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:51.171 14:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.171 ************************************ 00:23:51.171 START TEST nvmf_aer 00:23:51.171 ************************************ 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:51.171 * Looking for test storage... 
00:23:51.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.171 14:32:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.172 14:32:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:57.765 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:57.765 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:57.765 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:57.765 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.765 
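The nvmf_tcp_init records that follow wire the two detected e810 ports into a point-to-point test network: the target-facing port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, and a ping in each direction confirms the link before the target application is started. Condensed into plain commands, and assuming the interface and namespace names seen on this particular host, the wiring amounts roughly to:

  # Condensed recap (sketch; interface/namespace names are specific to this host).
  sudo ip -4 addr flush cvl_0_0 && sudo ip -4 addr flush cvl_0_1   # drop stale addresses
  sudo ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target-facing port goes inside
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps 10.0.0.1
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
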
14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.765 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:23:58.026 00:23:58.026 --- 10.0.0.2 ping statistics --- 00:23:58.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.026 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:58.026 00:23:58.026 --- 10.0.0.1 ping statistics --- 00:23:58.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.026 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3120536 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3120536 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 3120536 ']' 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:58.026 14:32:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:58.026 [2024-06-10 14:32:35.593012] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:23:58.026 [2024-06-10 14:32:35.593058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.287 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.288 [2024-06-10 14:32:35.676943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.288 [2024-06-10 14:32:35.756338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.288 [2024-06-10 14:32:35.756393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
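With the namespace in place, nvmfappstart launches nvmf_tgt inside it (the EAL and app_setup_trace notices above, and the reactor notices that follow, are its startup output), and waitforlisten simply polls the application's RPC UNIX socket until it answers; since the socket is path-based, rpc.py can reach it from outside the namespace. A minimal sketch of that startup and readiness wait, assuming the build tree from this run and rpc_get_methods as an inexpensive probe call (the real helper also caps its retries):

  # Minimal sketch of the startup and readiness wait, assuming this build tree.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # Poll the default RPC socket until the target responds.
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
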
00:23:58.288 [2024-06-10 14:32:35.756401] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.288 [2024-06-10 14:32:35.756408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.288 [2024-06-10 14:32:35.756413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.288 [2024-06-10 14:32:35.756539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.288 [2024-06-10 14:32:35.756664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.288 [2024-06-10 14:32:35.756832] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.288 [2024-06-10 14:32:35.756833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.231 [2024-06-10 14:32:36.517195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.231 Malloc0 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.231 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.232 [2024-06-10 14:32:36.576610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.232 [ 00:23:59.232 { 00:23:59.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:59.232 "subtype": "Discovery", 00:23:59.232 "listen_addresses": [], 00:23:59.232 "allow_any_host": true, 00:23:59.232 "hosts": [] 00:23:59.232 }, 00:23:59.232 { 00:23:59.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.232 "subtype": "NVMe", 00:23:59.232 "listen_addresses": [ 00:23:59.232 { 00:23:59.232 "trtype": "TCP", 00:23:59.232 "adrfam": "IPv4", 00:23:59.232 "traddr": "10.0.0.2", 00:23:59.232 "trsvcid": "4420" 00:23:59.232 } 00:23:59.232 ], 00:23:59.232 "allow_any_host": true, 00:23:59.232 "hosts": [], 00:23:59.232 "serial_number": "SPDK00000000000001", 00:23:59.232 "model_number": "SPDK bdev Controller", 00:23:59.232 "max_namespaces": 2, 00:23:59.232 "min_cntlid": 1, 00:23:59.232 "max_cntlid": 65519, 00:23:59.232 "namespaces": [ 00:23:59.232 { 00:23:59.232 "nsid": 1, 00:23:59.232 "bdev_name": "Malloc0", 00:23:59.232 "name": "Malloc0", 00:23:59.232 "nguid": "7015916EF4BA4DA5883BF276823F20B4", 00:23:59.232 "uuid": "7015916e-f4ba-4da5-883b-f276823f20b4" 00:23:59.232 } 00:23:59.232 ] 00:23:59.232 } 00:23:59.232 ] 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3120889 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:59.232 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:23:59.232 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.493 Malloc1 00:23:59.493 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 Asynchronous Event Request test 00:23:59.494 Attaching to 10.0.0.2 00:23:59.494 Attached to 10.0.0.2 00:23:59.494 Registering asynchronous event callbacks... 00:23:59.494 Starting namespace attribute notice tests for all controllers... 00:23:59.494 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:59.494 aer_cb - Changed Namespace 00:23:59.494 Cleaning up... 
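The ordering of the records above shows how the AER case is exercised: the aer tool attaches to cnode1, arms its event callbacks and, judging by the waitforfile loop returning before any namespace change, touches /tmp/aer_touch_file once it is ready; the script then creates Malloc1 and adds it as nsid 2, which makes the target emit a Changed Namespace List notice (log page 4) that aer_cb reports before the tool cleans up. Reduced to its essentials, and reusing the paths and NQN from this run, the trigger sequence is roughly:

  # Sketch of the trigger sequence, matching the record order above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { $SPDK/scripts/rpc.py "$@"; }
  rm -f /tmp/aer_touch_file

  # 1. Start the listener; -t names the file it touches once its AER callbacks
  #    are armed (flags copied from the invocation above).
  $SPDK/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # the waitforfile loop above

  # 2. Adding a second namespace is the event source: the target emits a
  #    Changed Namespace List notice (log page 4), which aer_cb reports.
  rpc bdev_malloc_create 64 4096 --name Malloc1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

  # 3. The listener exits after handling the notice; the script just waits on it.
  wait $aerpid
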
00:23:59.494 [ 00:23:59.494 { 00:23:59.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:59.494 "subtype": "Discovery", 00:23:59.494 "listen_addresses": [], 00:23:59.494 "allow_any_host": true, 00:23:59.494 "hosts": [] 00:23:59.494 }, 00:23:59.494 { 00:23:59.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.494 "subtype": "NVMe", 00:23:59.494 "listen_addresses": [ 00:23:59.494 { 00:23:59.494 "trtype": "TCP", 00:23:59.494 "adrfam": "IPv4", 00:23:59.494 "traddr": "10.0.0.2", 00:23:59.494 "trsvcid": "4420" 00:23:59.494 } 00:23:59.494 ], 00:23:59.494 "allow_any_host": true, 00:23:59.494 "hosts": [], 00:23:59.494 "serial_number": "SPDK00000000000001", 00:23:59.494 "model_number": "SPDK bdev Controller", 00:23:59.494 "max_namespaces": 2, 00:23:59.494 "min_cntlid": 1, 00:23:59.494 "max_cntlid": 65519, 00:23:59.494 "namespaces": [ 00:23:59.494 { 00:23:59.494 "nsid": 1, 00:23:59.494 "bdev_name": "Malloc0", 00:23:59.494 "name": "Malloc0", 00:23:59.494 "nguid": "7015916EF4BA4DA5883BF276823F20B4", 00:23:59.494 "uuid": "7015916e-f4ba-4da5-883b-f276823f20b4" 00:23:59.494 }, 00:23:59.494 { 00:23:59.494 "nsid": 2, 00:23:59.494 "bdev_name": "Malloc1", 00:23:59.494 "name": "Malloc1", 00:23:59.494 "nguid": "04AB7D1E70324A58B826F9C8C81B513A", 00:23:59.494 "uuid": "04ab7d1e-7032-4a58-b826-f9c8c81b513a" 00:23:59.494 } 00:23:59.494 ] 00:23:59.494 } 00:23:59.494 ] 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3120889 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.494 14:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.494 rmmod nvme_tcp 00:23:59.494 rmmod nvme_fabrics 00:23:59.494 rmmod nvme_keyring 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3120536 ']' 00:23:59.494 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 3120536 ']' 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3120536' 00:23:59.755 killing process with pid 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 3120536 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.755 14:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.304 14:32:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.304 00:24:02.304 real 0m11.110s 00:24:02.304 user 0m8.322s 00:24:02.304 sys 0m5.699s 00:24:02.304 14:32:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:02.304 14:32:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:02.304 ************************************ 00:24:02.304 END TEST nvmf_aer 00:24:02.304 ************************************ 00:24:02.304 14:32:39 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:02.304 14:32:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:02.304 14:32:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:02.304 14:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.304 ************************************ 00:24:02.304 START TEST nvmf_async_init 00:24:02.304 ************************************ 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:02.304 * Looking for test storage... 
00:24:02.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:02.304 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0fcf6eef235b4262911279d6bfdeabd1 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.305 14:32:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.305 14:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:08.931 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:08.931 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:08.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
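The "Found net devices under ..." lines come from a sysfs walk over the detected E810 functions. A sketch of the mapping the trace performs, using the PCI addresses from this run:

    # each PCI function under test exposes its netdev name via sysfs
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done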
00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:08.931 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.931 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:24:09.192 00:24:09.192 --- 10.0.0.2 ping statistics --- 00:24:09.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.192 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:24:09.192 00:24:09.192 --- 10.0.0.1 ping statistics --- 00:24:09.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.192 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.192 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3125033 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3125033 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 3125033 ']' 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:09.453 14:32:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.453 [2024-06-10 14:32:46.873059] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
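The ping output above is the reachability check at the end of nvmf_tcp_init. A condensed sketch of the topology it builds, using the interface names and addresses from this run (one NIC port is moved into a network namespace and acts as the target, the other stays in the default namespace as the initiator):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is the nvmfappstart / waitforlisten output that follows.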
00:24:09.453 [2024-06-10 14:32:46.873117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.453 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.453 [2024-06-10 14:32:46.957878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.453 [2024-06-10 14:32:47.022111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.453 [2024-06-10 14:32:47.022145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.453 [2024-06-10 14:32:47.022152] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.453 [2024-06-10 14:32:47.022159] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.453 [2024-06-10 14:32:47.022164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.453 [2024-06-10 14:32:47.022183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 [2024-06-10 14:32:47.786272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 null0 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0fcf6eef235b4262911279d6bfdeabd1 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.396 [2024-06-10 14:32:47.826521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.396 14:32:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.657 nvme0n1 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.657 [ 00:24:10.657 { 00:24:10.657 "name": "nvme0n1", 00:24:10.657 "aliases": [ 00:24:10.657 "0fcf6eef-235b-4262-9112-79d6bfdeabd1" 00:24:10.657 ], 00:24:10.657 "product_name": "NVMe disk", 00:24:10.657 "block_size": 512, 00:24:10.657 "num_blocks": 2097152, 00:24:10.657 "uuid": "0fcf6eef-235b-4262-9112-79d6bfdeabd1", 00:24:10.657 "assigned_rate_limits": { 00:24:10.657 "rw_ios_per_sec": 0, 00:24:10.657 "rw_mbytes_per_sec": 0, 00:24:10.657 "r_mbytes_per_sec": 0, 00:24:10.657 "w_mbytes_per_sec": 0 00:24:10.657 }, 00:24:10.657 "claimed": false, 00:24:10.657 "zoned": false, 00:24:10.657 "supported_io_types": { 00:24:10.657 "read": true, 00:24:10.657 "write": true, 00:24:10.657 "unmap": false, 00:24:10.657 "write_zeroes": true, 00:24:10.657 "flush": true, 00:24:10.657 "reset": true, 00:24:10.657 "compare": true, 00:24:10.657 "compare_and_write": true, 00:24:10.657 "abort": true, 00:24:10.657 "nvme_admin": true, 00:24:10.657 "nvme_io": true 00:24:10.657 }, 00:24:10.657 "memory_domains": [ 00:24:10.657 { 00:24:10.657 "dma_device_id": "system", 00:24:10.657 "dma_device_type": 1 00:24:10.657 } 00:24:10.657 ], 00:24:10.657 "driver_specific": { 00:24:10.657 "nvme": [ 00:24:10.657 { 00:24:10.657 "trid": { 00:24:10.657 "trtype": "TCP", 00:24:10.657 "adrfam": "IPv4", 00:24:10.657 "traddr": "10.0.0.2", 00:24:10.657 "trsvcid": "4420", 00:24:10.657 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:10.657 }, 00:24:10.657 "ctrlr_data": { 00:24:10.657 "cntlid": 1, 00:24:10.657 "vendor_id": "0x8086", 00:24:10.657 "model_number": "SPDK bdev Controller", 00:24:10.657 "serial_number": "00000000000000000000", 00:24:10.657 "firmware_revision": 
"24.09", 00:24:10.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.657 "oacs": { 00:24:10.657 "security": 0, 00:24:10.657 "format": 0, 00:24:10.657 "firmware": 0, 00:24:10.657 "ns_manage": 0 00:24:10.657 }, 00:24:10.657 "multi_ctrlr": true, 00:24:10.657 "ana_reporting": false 00:24:10.657 }, 00:24:10.657 "vs": { 00:24:10.657 "nvme_version": "1.3" 00:24:10.657 }, 00:24:10.657 "ns_data": { 00:24:10.657 "id": 1, 00:24:10.657 "can_share": true 00:24:10.657 } 00:24:10.657 } 00:24:10.657 ], 00:24:10.657 "mp_policy": "active_passive" 00:24:10.657 } 00:24:10.657 } 00:24:10.657 ] 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.657 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.658 [2024-06-10 14:32:48.082994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:10.658 [2024-06-10 14:32:48.083085] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e8b20 (9): Bad file descriptor 00:24:10.658 [2024-06-10 14:32:48.215430] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.658 [ 00:24:10.658 { 00:24:10.658 "name": "nvme0n1", 00:24:10.658 "aliases": [ 00:24:10.658 "0fcf6eef-235b-4262-9112-79d6bfdeabd1" 00:24:10.658 ], 00:24:10.658 "product_name": "NVMe disk", 00:24:10.658 "block_size": 512, 00:24:10.658 "num_blocks": 2097152, 00:24:10.658 "uuid": "0fcf6eef-235b-4262-9112-79d6bfdeabd1", 00:24:10.658 "assigned_rate_limits": { 00:24:10.658 "rw_ios_per_sec": 0, 00:24:10.658 "rw_mbytes_per_sec": 0, 00:24:10.658 "r_mbytes_per_sec": 0, 00:24:10.658 "w_mbytes_per_sec": 0 00:24:10.658 }, 00:24:10.658 "claimed": false, 00:24:10.658 "zoned": false, 00:24:10.658 "supported_io_types": { 00:24:10.658 "read": true, 00:24:10.658 "write": true, 00:24:10.658 "unmap": false, 00:24:10.658 "write_zeroes": true, 00:24:10.658 "flush": true, 00:24:10.658 "reset": true, 00:24:10.658 "compare": true, 00:24:10.658 "compare_and_write": true, 00:24:10.658 "abort": true, 00:24:10.658 "nvme_admin": true, 00:24:10.658 "nvme_io": true 00:24:10.658 }, 00:24:10.658 "memory_domains": [ 00:24:10.658 { 00:24:10.658 "dma_device_id": "system", 00:24:10.658 "dma_device_type": 1 00:24:10.658 } 00:24:10.658 ], 00:24:10.658 "driver_specific": { 00:24:10.658 "nvme": [ 00:24:10.658 { 00:24:10.658 "trid": { 00:24:10.658 "trtype": "TCP", 00:24:10.658 "adrfam": "IPv4", 00:24:10.658 "traddr": "10.0.0.2", 00:24:10.658 "trsvcid": "4420", 00:24:10.658 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:10.658 }, 00:24:10.658 "ctrlr_data": { 00:24:10.658 "cntlid": 2, 00:24:10.658 "vendor_id": "0x8086", 00:24:10.658 "model_number": "SPDK bdev Controller", 00:24:10.658 "serial_number": "00000000000000000000", 00:24:10.658 "firmware_revision": "24.09", 00:24:10.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.658 
"oacs": { 00:24:10.658 "security": 0, 00:24:10.658 "format": 0, 00:24:10.658 "firmware": 0, 00:24:10.658 "ns_manage": 0 00:24:10.658 }, 00:24:10.658 "multi_ctrlr": true, 00:24:10.658 "ana_reporting": false 00:24:10.658 }, 00:24:10.658 "vs": { 00:24:10.658 "nvme_version": "1.3" 00:24:10.658 }, 00:24:10.658 "ns_data": { 00:24:10.658 "id": 1, 00:24:10.658 "can_share": true 00:24:10.658 } 00:24:10.658 } 00:24:10.658 ], 00:24:10.658 "mp_policy": "active_passive" 00:24:10.658 } 00:24:10.658 } 00:24:10.658 ] 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.658 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l06Hy2kFmi 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l06Hy2kFmi 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 [2024-06-10 14:32:48.283629] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.920 [2024-06-10 14:32:48.283790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l06Hy2kFmi 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 [2024-06-10 14:32:48.291644] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l06Hy2kFmi 00:24:10.920 14:32:48 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 [2024-06-10 14:32:48.299667] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.920 [2024-06-10 14:32:48.299719] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:10.920 nvme0n1 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.920 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.920 [ 00:24:10.920 { 00:24:10.920 "name": "nvme0n1", 00:24:10.920 "aliases": [ 00:24:10.920 "0fcf6eef-235b-4262-9112-79d6bfdeabd1" 00:24:10.920 ], 00:24:10.920 "product_name": "NVMe disk", 00:24:10.920 "block_size": 512, 00:24:10.920 "num_blocks": 2097152, 00:24:10.920 "uuid": "0fcf6eef-235b-4262-9112-79d6bfdeabd1", 00:24:10.920 "assigned_rate_limits": { 00:24:10.920 "rw_ios_per_sec": 0, 00:24:10.920 "rw_mbytes_per_sec": 0, 00:24:10.920 "r_mbytes_per_sec": 0, 00:24:10.920 "w_mbytes_per_sec": 0 00:24:10.920 }, 00:24:10.920 "claimed": false, 00:24:10.920 "zoned": false, 00:24:10.920 "supported_io_types": { 00:24:10.920 "read": true, 00:24:10.920 "write": true, 00:24:10.920 "unmap": false, 00:24:10.920 "write_zeroes": true, 00:24:10.920 "flush": true, 00:24:10.920 "reset": true, 00:24:10.920 "compare": true, 00:24:10.920 "compare_and_write": true, 00:24:10.920 "abort": true, 00:24:10.920 "nvme_admin": true, 00:24:10.920 "nvme_io": true 00:24:10.920 }, 00:24:10.920 "memory_domains": [ 00:24:10.920 { 00:24:10.920 "dma_device_id": "system", 00:24:10.920 "dma_device_type": 1 00:24:10.920 } 00:24:10.920 ], 00:24:10.920 "driver_specific": { 00:24:10.920 "nvme": [ 00:24:10.920 { 00:24:10.920 "trid": { 00:24:10.920 "trtype": "TCP", 00:24:10.920 "adrfam": "IPv4", 00:24:10.920 "traddr": "10.0.0.2", 00:24:10.920 "trsvcid": "4421", 00:24:10.920 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:10.920 }, 00:24:10.920 "ctrlr_data": { 00:24:10.921 "cntlid": 3, 00:24:10.921 "vendor_id": "0x8086", 00:24:10.921 "model_number": "SPDK bdev Controller", 00:24:10.921 "serial_number": "00000000000000000000", 00:24:10.921 "firmware_revision": "24.09", 00:24:10.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.921 "oacs": { 00:24:10.921 "security": 0, 00:24:10.921 "format": 0, 00:24:10.921 "firmware": 0, 00:24:10.921 "ns_manage": 0 00:24:10.921 }, 00:24:10.921 "multi_ctrlr": true, 00:24:10.921 "ana_reporting": false 00:24:10.921 }, 00:24:10.921 "vs": { 00:24:10.921 "nvme_version": "1.3" 00:24:10.921 }, 00:24:10.921 "ns_data": { 00:24:10.921 "id": 1, 00:24:10.921 "can_share": true 00:24:10.921 } 00:24:10.921 } 00:24:10.921 ], 00:24:10.921 "mp_policy": "active_passive" 00:24:10.921 } 00:24:10.921 } 00:24:10.921 ] 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.l06Hy2kFmi 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.921 rmmod nvme_tcp 00:24:10.921 rmmod nvme_fabrics 00:24:10.921 rmmod nvme_keyring 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3125033 ']' 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3125033 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 3125033 ']' 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 3125033 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:10.921 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3125033 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3125033' 00:24:11.181 killing process with pid 3125033 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 3125033 00:24:11.181 [2024-06-10 14:32:48.554754] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.181 [2024-06-10 14:32:48.554794] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 3125033 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.181 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.182 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.182 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.182 14:32:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.182 
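For reference, the RPC sequence async_init.sh drove above condenses to the following sketch. rpc_cmd in the trace appears to wrap scripts/rpc.py; the NGUID, NQNs, and PSK literal are the ones this run used, and the PSK path interface is flagged as deprecated in the log itself:

    # target side: transport, null bdev, subsystem, namespace, plain TCP listener
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0fcf6eef235b4262911279d6bfdeabd1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # initiator role (same SPDK app in this test): attach, reset, detach;
    # cntlid advances 1 -> 2 across the reset, as the bdev_get_bdevs dumps show
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    # TLS leg: restrict hosts, open a --secure-channel listener on 4421, register the
    # host NQN with a PSK, reconnect over TLS (cntlid 3), then clean up
    key=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$key"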
14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.182 14:32:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.732 14:32:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.732 00:24:13.732 real 0m11.352s 00:24:13.732 user 0m4.173s 00:24:13.732 sys 0m5.734s 00:24:13.732 14:32:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:13.732 14:32:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.732 ************************************ 00:24:13.732 END TEST nvmf_async_init 00:24:13.732 ************************************ 00:24:13.732 14:32:50 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:13.732 14:32:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:13.732 14:32:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:13.732 14:32:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.732 ************************************ 00:24:13.732 START TEST dma 00:24:13.732 ************************************ 00:24:13.732 14:32:50 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:13.732 * Looking for test storage... 00:24:13.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.732 14:32:50 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.732 14:32:50 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.732 14:32:50 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.732 14:32:50 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.732 14:32:50 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.732 14:32:50 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.732 14:32:50 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.732 14:32:50 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:13.732 14:32:50 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.732 14:32:50 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.732 14:32:50 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:13.732 14:32:50 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:13.732 00:24:13.732 real 0m0.128s 00:24:13.732 user 0m0.055s 00:24:13.732 sys 0m0.081s 00:24:13.732 
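host/dma.sh exits immediately on tcp, which is why the timing above is near zero. Roughly the guard it hits (the variable name is an assumption; the trace only shows the already-expanded comparison of tcp against rdma):

    # dma.sh only applies to rdma transports; everything else is a successful no-op
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi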
14:32:50 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:13.732 14:32:50 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:13.732 ************************************ 00:24:13.732 END TEST dma 00:24:13.733 ************************************ 00:24:13.733 14:32:51 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:13.733 14:32:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:13.733 14:32:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:13.733 14:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.733 ************************************ 00:24:13.733 START TEST nvmf_identify 00:24:13.733 ************************************ 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:13.733 * Looking for test storage... 00:24:13.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.733 14:32:51 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
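Each of these host tests repeats the same prologue: it sources test/nvmf/common.sh, which fixes the TCP ports and generates a fresh host NQN before nvmftestinit runs. Condensed from the trace above (the NVME_HOSTID derivation is an assumption; the trace only shows the resulting values):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # fresh nqn.2014-08.org.nvmexpress:uuid:... each run
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # uuid portion of the host NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")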
00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.733 14:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.881 14:32:57 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:21.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:21.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.881 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:21.882 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:21.882 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.882 14:32:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.882 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:24:21.882 00:24:21.882 --- 10.0.0.2 ping statistics --- 00:24:21.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.882 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:24:21.882 00:24:21.882 --- 10.0.0.1 ping statistics --- 00:24:21.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.882 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3129603 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3129603 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 3129603 ']' 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:21.882 14:32:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 [2024-06-10 14:32:58.358265] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:24:21.882 [2024-06-10 14:32:58.358341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.882 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.882 [2024-06-10 14:32:58.446118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.882 [2024-06-10 14:32:58.543108] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.882 [2024-06-10 14:32:58.543163] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.882 [2024-06-10 14:32:58.543171] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.882 [2024-06-10 14:32:58.543178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.882 [2024-06-10 14:32:58.543184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.882 [2024-06-10 14:32:58.543337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.882 [2024-06-10 14:32:58.543479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.882 [2024-06-10 14:32:58.543624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.882 [2024-06-10 14:32:58.543625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 [2024-06-10 14:32:59.250973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 Malloc0 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.882 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.883 [2024-06-10 14:32:59.348024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.883 [ 00:24:21.883 { 00:24:21.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:21.883 "subtype": "Discovery", 00:24:21.883 "listen_addresses": [ 00:24:21.883 { 00:24:21.883 "trtype": "TCP", 00:24:21.883 "adrfam": "IPv4", 00:24:21.883 "traddr": "10.0.0.2", 00:24:21.883 "trsvcid": "4420" 00:24:21.883 } 00:24:21.883 ], 00:24:21.883 "allow_any_host": true, 00:24:21.883 "hosts": [] 00:24:21.883 }, 00:24:21.883 { 00:24:21.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.883 "subtype": "NVMe", 00:24:21.883 "listen_addresses": [ 00:24:21.883 { 00:24:21.883 "trtype": "TCP", 00:24:21.883 "adrfam": "IPv4", 00:24:21.883 "traddr": "10.0.0.2", 00:24:21.883 "trsvcid": "4420" 00:24:21.883 } 00:24:21.883 ], 00:24:21.883 "allow_any_host": true, 00:24:21.883 "hosts": [], 00:24:21.883 "serial_number": "SPDK00000000000001", 00:24:21.883 "model_number": "SPDK bdev Controller", 00:24:21.883 "max_namespaces": 32, 00:24:21.883 "min_cntlid": 1, 00:24:21.883 "max_cntlid": 65519, 00:24:21.883 "namespaces": [ 00:24:21.883 { 00:24:21.883 "nsid": 1, 00:24:21.883 "bdev_name": "Malloc0", 00:24:21.883 "name": "Malloc0", 00:24:21.883 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:21.883 "eui64": "ABCDEF0123456789", 00:24:21.883 "uuid": "ddf93d5c-d8f4-4565-93a7-7cbb278780b5" 00:24:21.883 } 00:24:21.883 ] 00:24:21.883 } 00:24:21.883 ] 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.883 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:21.883 [2024-06-10 14:32:59.410134] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:24:21.883 [2024-06-10 14:32:59.410199] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129808 ] 00:24:21.883 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.883 [2024-06-10 14:32:59.442981] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:21.883 [2024-06-10 14:32:59.443033] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:21.883 [2024-06-10 14:32:59.443038] nvme_tcp.c:2337:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:21.883 [2024-06-10 14:32:59.443050] nvme_tcp.c:2355:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:21.883 [2024-06-10 14:32:59.443058] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:21.883 [2024-06-10 14:32:59.446349] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:21.883 [2024-06-10 14:32:59.446380] nvme_tcp.c:1550:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e82ec0 0 00:24:21.883 [2024-06-10 14:32:59.454326] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:21.883 [2024-06-10 14:32:59.454337] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:21.883 [2024-06-10 14:32:59.454342] nvme_tcp.c:1596:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:21.883 [2024-06-10 14:32:59.454345] nvme_tcp.c:1597:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:21.883 [2024-06-10 14:32:59.454379] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.454393] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.454398] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.883 [2024-06-10 14:32:59.454410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:21.883 [2024-06-10 14:32:59.454425] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.883 [2024-06-10 14:32:59.462328] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.883 [2024-06-10 14:32:59.462338] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.883 [2024-06-10 14:32:59.462342] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462346] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.883 [2024-06-10 14:32:59.462357] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:21.883 [2024-06-10 14:32:59.462364] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:21.883 [2024-06-10 14:32:59.462369] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:21.883 [2024-06-10 14:32:59.462383] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462387] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462390] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.883 [2024-06-10 14:32:59.462398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.883 [2024-06-10 14:32:59.462410] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.883 [2024-06-10 14:32:59.462631] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.883 [2024-06-10 14:32:59.462638] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.883 [2024-06-10 14:32:59.462642] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462646] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.883 [2024-06-10 14:32:59.462654] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:21.883 [2024-06-10 14:32:59.462661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:21.883 [2024-06-10 14:32:59.462667] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462671] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462677] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.883 [2024-06-10 14:32:59.462684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.883 [2024-06-10 14:32:59.462694] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.883 [2024-06-10 14:32:59.462897] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.883 [2024-06-10 14:32:59.462903] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.883 [2024-06-10 14:32:59.462907] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462910] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.883 [2024-06-10 14:32:59.462916] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:21.883 [2024-06-10 14:32:59.462923] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:21.883 [2024-06-10 14:32:59.462930] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462933] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.462937] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.883 [2024-06-10 14:32:59.462943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.883 [2024-06-10 14:32:59.462953] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.883 [2024-06-10 14:32:59.463130] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.883 [2024-06-10 
14:32:59.463137] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.883 [2024-06-10 14:32:59.463140] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.463144] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.883 [2024-06-10 14:32:59.463150] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:21.883 [2024-06-10 14:32:59.463159] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.463163] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.883 [2024-06-10 14:32:59.463166] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.883 [2024-06-10 14:32:59.463173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.883 [2024-06-10 14:32:59.463182] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.463382] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.463389] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.463392] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463396] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.884 [2024-06-10 14:32:59.463401] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:21.884 [2024-06-10 14:32:59.463406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:21.884 [2024-06-10 14:32:59.463413] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:21.884 [2024-06-10 14:32:59.463518] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:21.884 [2024-06-10 14:32:59.463523] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:21.884 [2024-06-10 14:32:59.463533] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463537] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463540] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.463547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.884 [2024-06-10 14:32:59.463557] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.463777] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.463784] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.463787] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463791] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.884 [2024-06-10 14:32:59.463796] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:21.884 [2024-06-10 14:32:59.463805] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463808] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.463812] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.463819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.884 [2024-06-10 14:32:59.463828] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.464020] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.464026] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.464029] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464033] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.884 [2024-06-10 14:32:59.464038] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:21.884 [2024-06-10 14:32:59.464043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:21.884 [2024-06-10 14:32:59.464062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464071] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464074] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.884 [2024-06-10 14:32:59.464091] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.464336] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.884 [2024-06-10 14:32:59.464343] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.884 [2024-06-10 14:32:59.464346] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464350] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e82ec0): datao=0, datal=4096, cccid=0 00:24:21.884 [2024-06-10 14:32:59.464355] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f05e40) on tqpair(0x1e82ec0): expected_datao=0, payload_size=4096 00:24:21.884 [2024-06-10 14:32:59.464361] nvme_tcp.c: 
771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464369] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464373] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464488] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.464494] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.464498] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464501] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.884 [2024-06-10 14:32:59.464510] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:21.884 [2024-06-10 14:32:59.464515] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:21.884 [2024-06-10 14:32:59.464521] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:21.884 [2024-06-10 14:32:59.464526] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:21.884 [2024-06-10 14:32:59.464531] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:21.884 [2024-06-10 14:32:59.464535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464543] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464549] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464553] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464557] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.884 [2024-06-10 14:32:59.464574] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.464781] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.464788] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.464791] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464795] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f05e40) on tqpair=0x1e82ec0 00:24:21.884 [2024-06-10 14:32:59.464803] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464806] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464810] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:21.884 [2024-06-10 14:32:59.464822] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464825] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464829] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.884 [2024-06-10 14:32:59.464840] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464844] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464847] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.884 [2024-06-10 14:32:59.464861] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464865] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464868] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.884 [2024-06-10 14:32:59.464878] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464888] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:21.884 [2024-06-10 14:32:59.464894] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.464898] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e82ec0) 00:24:21.884 [2024-06-10 14:32:59.464904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.884 [2024-06-10 14:32:59.464915] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05e40, cid 0, qid 0 00:24:21.884 [2024-06-10 14:32:59.464920] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f05fc0, cid 1, qid 0 00:24:21.884 [2024-06-10 14:32:59.464925] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06140, cid 2, qid 0 00:24:21.884 [2024-06-10 14:32:59.464930] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:21.884 [2024-06-10 14:32:59.464934] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06440, cid 4, qid 0 00:24:21.884 [2024-06-10 14:32:59.465150] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.884 [2024-06-10 14:32:59.465158] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.884 [2024-06-10 14:32:59.465161] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.884 [2024-06-10 14:32:59.465165] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f06440) on tqpair=0x1e82ec0 
00:24:21.884 [2024-06-10 14:32:59.465170] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:21.884 [2024-06-10 14:32:59.465175] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:21.885 [2024-06-10 14:32:59.465185] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.885 [2024-06-10 14:32:59.465189] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e82ec0) 00:24:21.885 [2024-06-10 14:32:59.465196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.885 [2024-06-10 14:32:59.465205] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06440, cid 4, qid 0 00:24:21.885 [2024-06-10 14:32:59.465418] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.885 [2024-06-10 14:32:59.465425] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.885 [2024-06-10 14:32:59.465428] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.885 [2024-06-10 14:32:59.465432] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e82ec0): datao=0, datal=4096, cccid=4 00:24:21.885 [2024-06-10 14:32:59.465436] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f06440) on tqpair(0x1e82ec0): expected_datao=0, payload_size=4096 00:24:21.885 [2024-06-10 14:32:59.465441] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.885 [2024-06-10 14:32:59.465453] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.885 [2024-06-10 14:32:59.465457] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510326] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.151 [2024-06-10 14:32:59.510336] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.151 [2024-06-10 14:32:59.510340] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510344] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f06440) on tqpair=0x1e82ec0 00:24:22.151 [2024-06-10 14:32:59.510356] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:22.151 [2024-06-10 14:32:59.510379] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510384] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e82ec0) 00:24:22.151 [2024-06-10 14:32:59.510391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.151 [2024-06-10 14:32:59.510398] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510401] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510405] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e82ec0) 00:24:22.151 [2024-06-10 14:32:59.510411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.151 [2024-06-10 14:32:59.510426] nvme_tcp.c: 
928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06440, cid 4, qid 0 00:24:22.151 [2024-06-10 14:32:59.510431] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f065c0, cid 5, qid 0 00:24:22.151 [2024-06-10 14:32:59.510700] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.151 [2024-06-10 14:32:59.510706] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.151 [2024-06-10 14:32:59.510710] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510713] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e82ec0): datao=0, datal=1024, cccid=4 00:24:22.151 [2024-06-10 14:32:59.510717] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f06440) on tqpair(0x1e82ec0): expected_datao=0, payload_size=1024 00:24:22.151 [2024-06-10 14:32:59.510722] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510728] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510732] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510737] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.151 [2024-06-10 14:32:59.510743] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.151 [2024-06-10 14:32:59.510746] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.510750] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f065c0) on tqpair=0x1e82ec0 00:24:22.151 [2024-06-10 14:32:59.556324] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.151 [2024-06-10 14:32:59.556334] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.151 [2024-06-10 14:32:59.556337] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556341] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f06440) on tqpair=0x1e82ec0 00:24:22.151 [2024-06-10 14:32:59.556356] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556361] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e82ec0) 00:24:22.151 [2024-06-10 14:32:59.556367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.151 [2024-06-10 14:32:59.556383] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06440, cid 4, qid 0 00:24:22.151 [2024-06-10 14:32:59.556641] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.151 [2024-06-10 14:32:59.556647] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.151 [2024-06-10 14:32:59.556653] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556657] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e82ec0): datao=0, datal=3072, cccid=4 00:24:22.151 [2024-06-10 14:32:59.556661] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f06440) on tqpair(0x1e82ec0): expected_datao=0, payload_size=3072 00:24:22.151 [2024-06-10 14:32:59.556666] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556672] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:24:22.151 [2024-06-10 14:32:59.556676] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556772] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.151 [2024-06-10 14:32:59.556780] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.151 [2024-06-10 14:32:59.556783] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556787] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f06440) on tqpair=0x1e82ec0 00:24:22.151 [2024-06-10 14:32:59.556795] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.556799] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e82ec0) 00:24:22.151 [2024-06-10 14:32:59.556806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.151 [2024-06-10 14:32:59.556819] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f06440, cid 4, qid 0 00:24:22.151 [2024-06-10 14:32:59.557090] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.151 [2024-06-10 14:32:59.557096] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.151 [2024-06-10 14:32:59.557100] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.557103] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e82ec0): datao=0, datal=8, cccid=4 00:24:22.151 [2024-06-10 14:32:59.557107] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f06440) on tqpair(0x1e82ec0): expected_datao=0, payload_size=8 00:24:22.151 [2024-06-10 14:32:59.557112] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.557118] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.557121] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.597544] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.151 [2024-06-10 14:32:59.597553] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.151 [2024-06-10 14:32:59.597557] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.151 [2024-06-10 14:32:59.597560] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f06440) on tqpair=0x1e82ec0 00:24:22.151 ===================================================== 00:24:22.151 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:22.151 ===================================================== 00:24:22.151 Controller Capabilities/Features 00:24:22.151 ================================ 00:24:22.151 Vendor ID: 0000 00:24:22.151 Subsystem Vendor ID: 0000 00:24:22.151 Serial Number: .................... 00:24:22.151 Model Number: ........................................ 
00:24:22.151 Firmware Version: 24.09 00:24:22.151 Recommended Arb Burst: 0 00:24:22.151 IEEE OUI Identifier: 00 00 00 00:24:22.151 Multi-path I/O 00:24:22.151 May have multiple subsystem ports: No 00:24:22.151 May have multiple controllers: No 00:24:22.151 Associated with SR-IOV VF: No 00:24:22.151 Max Data Transfer Size: 131072 00:24:22.151 Max Number of Namespaces: 0 00:24:22.151 Max Number of I/O Queues: 1024 00:24:22.151 NVMe Specification Version (VS): 1.3 00:24:22.151 NVMe Specification Version (Identify): 1.3 00:24:22.151 Maximum Queue Entries: 128 00:24:22.151 Contiguous Queues Required: Yes 00:24:22.151 Arbitration Mechanisms Supported 00:24:22.151 Weighted Round Robin: Not Supported 00:24:22.151 Vendor Specific: Not Supported 00:24:22.151 Reset Timeout: 15000 ms 00:24:22.151 Doorbell Stride: 4 bytes 00:24:22.151 NVM Subsystem Reset: Not Supported 00:24:22.151 Command Sets Supported 00:24:22.151 NVM Command Set: Supported 00:24:22.151 Boot Partition: Not Supported 00:24:22.151 Memory Page Size Minimum: 4096 bytes 00:24:22.151 Memory Page Size Maximum: 4096 bytes 00:24:22.151 Persistent Memory Region: Not Supported 00:24:22.151 Optional Asynchronous Events Supported 00:24:22.151 Namespace Attribute Notices: Not Supported 00:24:22.151 Firmware Activation Notices: Not Supported 00:24:22.151 ANA Change Notices: Not Supported 00:24:22.151 PLE Aggregate Log Change Notices: Not Supported 00:24:22.151 LBA Status Info Alert Notices: Not Supported 00:24:22.151 EGE Aggregate Log Change Notices: Not Supported 00:24:22.151 Normal NVM Subsystem Shutdown event: Not Supported 00:24:22.152 Zone Descriptor Change Notices: Not Supported 00:24:22.152 Discovery Log Change Notices: Supported 00:24:22.152 Controller Attributes 00:24:22.152 128-bit Host Identifier: Not Supported 00:24:22.152 Non-Operational Permissive Mode: Not Supported 00:24:22.152 NVM Sets: Not Supported 00:24:22.152 Read Recovery Levels: Not Supported 00:24:22.152 Endurance Groups: Not Supported 00:24:22.152 Predictable Latency Mode: Not Supported 00:24:22.152 Traffic Based Keep ALive: Not Supported 00:24:22.152 Namespace Granularity: Not Supported 00:24:22.152 SQ Associations: Not Supported 00:24:22.152 UUID List: Not Supported 00:24:22.152 Multi-Domain Subsystem: Not Supported 00:24:22.152 Fixed Capacity Management: Not Supported 00:24:22.152 Variable Capacity Management: Not Supported 00:24:22.152 Delete Endurance Group: Not Supported 00:24:22.152 Delete NVM Set: Not Supported 00:24:22.152 Extended LBA Formats Supported: Not Supported 00:24:22.152 Flexible Data Placement Supported: Not Supported 00:24:22.152 00:24:22.152 Controller Memory Buffer Support 00:24:22.152 ================================ 00:24:22.152 Supported: No 00:24:22.152 00:24:22.152 Persistent Memory Region Support 00:24:22.152 ================================ 00:24:22.152 Supported: No 00:24:22.152 00:24:22.152 Admin Command Set Attributes 00:24:22.152 ============================ 00:24:22.152 Security Send/Receive: Not Supported 00:24:22.152 Format NVM: Not Supported 00:24:22.152 Firmware Activate/Download: Not Supported 00:24:22.152 Namespace Management: Not Supported 00:24:22.152 Device Self-Test: Not Supported 00:24:22.152 Directives: Not Supported 00:24:22.152 NVMe-MI: Not Supported 00:24:22.152 Virtualization Management: Not Supported 00:24:22.152 Doorbell Buffer Config: Not Supported 00:24:22.152 Get LBA Status Capability: Not Supported 00:24:22.152 Command & Feature Lockdown Capability: Not Supported 00:24:22.152 Abort Command Limit: 1 00:24:22.152 Async 
Event Request Limit: 4 00:24:22.152 Number of Firmware Slots: N/A 00:24:22.152 Firmware Slot 1 Read-Only: N/A 00:24:22.152 Firmware Activation Without Reset: N/A 00:24:22.152 Multiple Update Detection Support: N/A 00:24:22.152 Firmware Update Granularity: No Information Provided 00:24:22.152 Per-Namespace SMART Log: No 00:24:22.152 Asymmetric Namespace Access Log Page: Not Supported 00:24:22.152 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:22.152 Command Effects Log Page: Not Supported 00:24:22.152 Get Log Page Extended Data: Supported 00:24:22.152 Telemetry Log Pages: Not Supported 00:24:22.152 Persistent Event Log Pages: Not Supported 00:24:22.152 Supported Log Pages Log Page: May Support 00:24:22.152 Commands Supported & Effects Log Page: Not Supported 00:24:22.152 Feature Identifiers & Effects Log Page:May Support 00:24:22.152 NVMe-MI Commands & Effects Log Page: May Support 00:24:22.152 Data Area 4 for Telemetry Log: Not Supported 00:24:22.152 Error Log Page Entries Supported: 128 00:24:22.152 Keep Alive: Not Supported 00:24:22.152 00:24:22.152 NVM Command Set Attributes 00:24:22.152 ========================== 00:24:22.152 Submission Queue Entry Size 00:24:22.152 Max: 1 00:24:22.152 Min: 1 00:24:22.152 Completion Queue Entry Size 00:24:22.152 Max: 1 00:24:22.152 Min: 1 00:24:22.152 Number of Namespaces: 0 00:24:22.152 Compare Command: Not Supported 00:24:22.152 Write Uncorrectable Command: Not Supported 00:24:22.152 Dataset Management Command: Not Supported 00:24:22.152 Write Zeroes Command: Not Supported 00:24:22.152 Set Features Save Field: Not Supported 00:24:22.152 Reservations: Not Supported 00:24:22.152 Timestamp: Not Supported 00:24:22.152 Copy: Not Supported 00:24:22.152 Volatile Write Cache: Not Present 00:24:22.152 Atomic Write Unit (Normal): 1 00:24:22.152 Atomic Write Unit (PFail): 1 00:24:22.152 Atomic Compare & Write Unit: 1 00:24:22.152 Fused Compare & Write: Supported 00:24:22.152 Scatter-Gather List 00:24:22.152 SGL Command Set: Supported 00:24:22.152 SGL Keyed: Supported 00:24:22.152 SGL Bit Bucket Descriptor: Not Supported 00:24:22.152 SGL Metadata Pointer: Not Supported 00:24:22.152 Oversized SGL: Not Supported 00:24:22.152 SGL Metadata Address: Not Supported 00:24:22.152 SGL Offset: Supported 00:24:22.152 Transport SGL Data Block: Not Supported 00:24:22.152 Replay Protected Memory Block: Not Supported 00:24:22.152 00:24:22.152 Firmware Slot Information 00:24:22.152 ========================= 00:24:22.152 Active slot: 0 00:24:22.152 00:24:22.152 00:24:22.152 Error Log 00:24:22.152 ========= 00:24:22.152 00:24:22.152 Active Namespaces 00:24:22.152 ================= 00:24:22.152 Discovery Log Page 00:24:22.152 ================== 00:24:22.152 Generation Counter: 2 00:24:22.152 Number of Records: 2 00:24:22.152 Record Format: 0 00:24:22.152 00:24:22.152 Discovery Log Entry 0 00:24:22.152 ---------------------- 00:24:22.152 Transport Type: 3 (TCP) 00:24:22.152 Address Family: 1 (IPv4) 00:24:22.152 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:22.152 Entry Flags: 00:24:22.152 Duplicate Returned Information: 1 00:24:22.152 Explicit Persistent Connection Support for Discovery: 1 00:24:22.152 Transport Requirements: 00:24:22.152 Secure Channel: Not Required 00:24:22.152 Port ID: 0 (0x0000) 00:24:22.152 Controller ID: 65535 (0xffff) 00:24:22.152 Admin Max SQ Size: 128 00:24:22.152 Transport Service Identifier: 4420 00:24:22.152 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:22.152 Transport Address: 10.0.0.2 00:24:22.152 
Discovery Log Entry 1 00:24:22.152 ---------------------- 00:24:22.152 Transport Type: 3 (TCP) 00:24:22.152 Address Family: 1 (IPv4) 00:24:22.152 Subsystem Type: 2 (NVM Subsystem) 00:24:22.152 Entry Flags: 00:24:22.152 Duplicate Returned Information: 0 00:24:22.152 Explicit Persistent Connection Support for Discovery: 0 00:24:22.152 Transport Requirements: 00:24:22.152 Secure Channel: Not Required 00:24:22.152 Port ID: 0 (0x0000) 00:24:22.152 Controller ID: 65535 (0xffff) 00:24:22.152 Admin Max SQ Size: 128 00:24:22.152 Transport Service Identifier: 4420 00:24:22.152 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:22.152 Transport Address: 10.0.0.2 [2024-06-10 14:32:59.597642] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:22.152 [2024-06-10 14:32:59.597654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.152 [2024-06-10 14:32:59.597661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.152 [2024-06-10 14:32:59.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.152 [2024-06-10 14:32:59.597673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.152 [2024-06-10 14:32:59.597684] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.597687] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.597691] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.152 [2024-06-10 14:32:59.597698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.152 [2024-06-10 14:32:59.597713] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.152 [2024-06-10 14:32:59.597831] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.152 [2024-06-10 14:32:59.597837] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.152 [2024-06-10 14:32:59.597840] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.597844] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.152 [2024-06-10 14:32:59.597851] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.597855] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.597858] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.152 [2024-06-10 14:32:59.597865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.152 [2024-06-10 14:32:59.597878] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.152 [2024-06-10 14:32:59.598070] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.152 [2024-06-10 14:32:59.598076] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.152 [2024-06-10 14:32:59.598080] 
nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.598083] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.152 [2024-06-10 14:32:59.598089] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:22.152 [2024-06-10 14:32:59.598093] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:22.152 [2024-06-10 14:32:59.598102] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.152 [2024-06-10 14:32:59.598106] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598109] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.598116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.598126] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.598283] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.598289] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.598292] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598296] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.598306] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598310] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598321] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.598328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.598338] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.598586] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.598592] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.598595] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598599] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.598609] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598613] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598618] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.598625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.598635] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.598890] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 
14:32:59.598896] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.598899] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598903] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.598913] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598917] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.598920] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.598927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.598936] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.599135] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.599142] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.599145] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599149] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.599159] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599163] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599166] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.599173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.599182] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.599393] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.599400] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.599404] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599407] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.599417] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599421] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599424] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.599431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.599441] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.599645] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.599652] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.599655] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:22.153 [2024-06-10 14:32:59.599659] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.599669] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599672] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599676] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.599686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.599696] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.599948] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.599954] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.599958] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599961] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.599971] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599975] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.599978] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.599985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.599994] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.600193] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.600199] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.600203] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.600206] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.600216] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.600220] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.600224] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e82ec0) 00:24:22.153 [2024-06-10 14:32:59.600230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.153 [2024-06-10 14:32:59.600240] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f062c0, cid 3, qid 0 00:24:22.153 [2024-06-10 14:32:59.604324] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.604333] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.604336] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.604340] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1f062c0) on tqpair=0x1e82ec0 00:24:22.153 [2024-06-10 14:32:59.604348] 
nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:22.153 00:24:22.153 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:22.153 [2024-06-10 14:32:59.641344] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:24:22.153 [2024-06-10 14:32:59.641384] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129925 ] 00:24:22.153 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.153 [2024-06-10 14:32:59.673849] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:22.153 [2024-06-10 14:32:59.673892] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:22.153 [2024-06-10 14:32:59.673900] nvme_tcp.c:2337:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:22.153 [2024-06-10 14:32:59.673911] nvme_tcp.c:2355:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:22.153 [2024-06-10 14:32:59.673919] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:22.153 [2024-06-10 14:32:59.677343] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:22.153 [2024-06-10 14:32:59.677370] nvme_tcp.c:1550:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19f9ec0 0 00:24:22.153 [2024-06-10 14:32:59.685324] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:22.153 [2024-06-10 14:32:59.685333] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:22.153 [2024-06-10 14:32:59.685337] nvme_tcp.c:1596:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:22.153 [2024-06-10 14:32:59.685340] nvme_tcp.c:1597:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:22.153 [2024-06-10 14:32:59.685369] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.685374] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.153 [2024-06-10 14:32:59.685378] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.153 [2024-06-10 14:32:59.685389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:22.153 [2024-06-10 14:32:59.685404] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.153 [2024-06-10 14:32:59.692325] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.153 [2024-06-10 14:32:59.692334] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.153 [2024-06-10 14:32:59.692337] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692342] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.692353] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:22.154 [2024-06-10 14:32:59.692360] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:22.154 [2024-06-10 14:32:59.692365] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:22.154 [2024-06-10 14:32:59.692377] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692381] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692385] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.692392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.692404] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.692591] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.692597] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.692601] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692605] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.692612] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:22.154 [2024-06-10 14:32:59.692619] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:22.154 [2024-06-10 14:32:59.692626] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692629] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692633] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.692640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.692652] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.692797] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.692803] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.692807] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692810] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.692816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:22.154 [2024-06-10 14:32:59.692823] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.692830] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692833] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.692837] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 
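The _nvme_ctrlr_set_state transitions in this trace (read vs, read cap, check en, enable CC.EN, wait for CSTS.RDY, identify, and so on) are driven internally by the SPDK NVMe host library once the application asks for a controller attach. The following is only a minimal sketch, assuming the public spdk_nvme_transport_id_parse()/spdk_nvme_connect() API and reusing the transport ID string that spdk_nvme_identify was invoked with in this run; it is not part of the test scripts themselves.

/* Sketch (assumption): application-level sequence whose internal state
 * machine produces a trace like the one above. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string the identify test passes via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() performs the fabric connect plus the
	 * read-vs/read-cap/enable/identify steps logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s Serial: %.20s\n",
	       (const char *)cdata->mn, (const char *)cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Detaching at the end requests the same controller shutdown (CC write, CSTS poll) that the discovery-controller teardown earlier in this log reports as "shutdown complete".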
00:24:22.154 [2024-06-10 14:32:59.692843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.692853] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.693033] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.693039] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.693043] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693046] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.693052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.693061] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693065] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693068] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.693075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.693084] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.693267] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.693273] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.693276] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693280] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.693285] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:22.154 [2024-06-10 14:32:59.693290] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.693297] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.693402] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:22.154 [2024-06-10 14:32:59.693406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.693413] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693417] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693422] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.693429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.693439] nvme_tcp.c: 
928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.693623] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.693629] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.693632] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693636] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.693641] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:22.154 [2024-06-10 14:32:59.693650] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693654] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693657] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.693664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.693673] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.693868] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.693874] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.693877] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693881] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.693886] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:22.154 [2024-06-10 14:32:59.693891] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:22.154 [2024-06-10 14:32:59.693898] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:22.154 [2024-06-10 14:32:59.693905] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:22.154 [2024-06-10 14:32:59.693913] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.693917] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.154 [2024-06-10 14:32:59.693923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.154 [2024-06-10 14:32:59.693933] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.154 [2024-06-10 14:32:59.694137] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.154 [2024-06-10 14:32:59.694143] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.154 [2024-06-10 14:32:59.694147] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694150] 
nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=4096, cccid=0 00:24:22.154 [2024-06-10 14:32:59.694155] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7ce40) on tqpair(0x19f9ec0): expected_datao=0, payload_size=4096 00:24:22.154 [2024-06-10 14:32:59.694159] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694167] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694171] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694319] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.154 [2024-06-10 14:32:59.694326] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.154 [2024-06-10 14:32:59.694329] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694333] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.154 [2024-06-10 14:32:59.694340] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:22.154 [2024-06-10 14:32:59.694345] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:22.154 [2024-06-10 14:32:59.694352] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:22.154 [2024-06-10 14:32:59.694356] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:22.154 [2024-06-10 14:32:59.694361] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:22.154 [2024-06-10 14:32:59.694365] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:22.154 [2024-06-10 14:32:59.694373] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:22.154 [2024-06-10 14:32:59.694379] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.154 [2024-06-10 14:32:59.694383] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694387] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:22.155 [2024-06-10 14:32:59.694404] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.155 [2024-06-10 14:32:59.694553] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.155 [2024-06-10 14:32:59.694559] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.155 [2024-06-10 14:32:59.694563] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694566] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7ce40) on tqpair=0x19f9ec0 00:24:22.155 [2024-06-10 14:32:59.694573] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694577] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 
14:32:59.694580] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.155 [2024-06-10 14:32:59.694592] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694596] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694599] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.155 [2024-06-10 14:32:59.694611] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694615] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694618] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.155 [2024-06-10 14:32:59.694629] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694633] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694636] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.155 [2024-06-10 14:32:59.694649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.694658] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.694664] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694668] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.155 [2024-06-10 14:32:59.694686] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7ce40, cid 0, qid 0 00:24:22.155 [2024-06-10 14:32:59.694691] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7cfc0, cid 1, qid 0 00:24:22.155 [2024-06-10 14:32:59.694696] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d140, cid 2, qid 0 00:24:22.155 [2024-06-10 14:32:59.694700] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.155 [2024-06-10 14:32:59.694705] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.155 [2024-06-10 14:32:59.694925] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.155 [2024-06-10 14:32:59.694931] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
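The four ASYNC EVENT REQUEST submissions (cid 0 through 3) and the keep-alive timer negotiation logged around this point are consumed by the host through the admin-queue poller. A small sketch of that consumer side, assuming the public SPDK callback/polling API rather than anything taken from this test run:

/* Sketch (assumption): servicing AER completions and keep alives for an
 * attached controller such as the one in the trace above. */
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Invoked when one of the outstanding AERs completes; cdw0 carries
	 * the async event type/info per the NVMe specification. */
	if (spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
}

void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Keep-alive transmission (per the negotiated timeout in the trace)
	 * and AER completions are both handled from this poller. */
	for (;;) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}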
00:24:22.155 [2024-06-10 14:32:59.694935] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694938] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.155 [2024-06-10 14:32:59.694944] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:22.155 [2024-06-10 14:32:59.694948] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.694956] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.694962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.694968] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694971] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.694975] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.694981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:22.155 [2024-06-10 14:32:59.694991] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.155 [2024-06-10 14:32:59.695142] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.155 [2024-06-10 14:32:59.695148] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.155 [2024-06-10 14:32:59.695151] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695155] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.155 [2024-06-10 14:32:59.695207] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.695215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.695222] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695229] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.695236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.155 [2024-06-10 14:32:59.695246] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.155 [2024-06-10 14:32:59.695418] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.155 [2024-06-10 14:32:59.695425] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.155 [2024-06-10 14:32:59.695428] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695432] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=4096, cccid=4 00:24:22.155 
[2024-06-10 14:32:59.695436] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d440) on tqpair(0x19f9ec0): expected_datao=0, payload_size=4096 00:24:22.155 [2024-06-10 14:32:59.695440] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695447] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695451] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695707] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.155 [2024-06-10 14:32:59.695713] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.155 [2024-06-10 14:32:59.695717] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695721] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.155 [2024-06-10 14:32:59.695733] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:22.155 [2024-06-10 14:32:59.695741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.695750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:22.155 [2024-06-10 14:32:59.695756] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695760] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.155 [2024-06-10 14:32:59.695766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.155 [2024-06-10 14:32:59.695777] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.155 [2024-06-10 14:32:59.695971] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.155 [2024-06-10 14:32:59.695977] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.155 [2024-06-10 14:32:59.695980] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.155 [2024-06-10 14:32:59.695983] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=4096, cccid=4 00:24:22.156 [2024-06-10 14:32:59.695988] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d440) on tqpair(0x19f9ec0): expected_datao=0, payload_size=4096 00:24:22.156 [2024-06-10 14:32:59.695992] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.696010] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.696013] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.696193] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.696199] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.696203] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.696206] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.696218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.696228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.696235] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.696239] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.696245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.696255] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.156 [2024-06-10 14:32:59.700324] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.156 [2024-06-10 14:32:59.700332] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.156 [2024-06-10 14:32:59.700335] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700339] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=4096, cccid=4 00:24:22.156 [2024-06-10 14:32:59.700343] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d440) on tqpair(0x19f9ec0): expected_datao=0, payload_size=4096 00:24:22.156 [2024-06-10 14:32:59.700347] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700354] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700357] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700363] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.700368] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.700372] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700375] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.700383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700391] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700404] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700409] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700414] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:22.156 [2024-06-10 14:32:59.700418] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 
30000 ms) 00:24:22.156 [2024-06-10 14:32:59.700423] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:22.156 [2024-06-10 14:32:59.700437] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700441] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.700448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.700454] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700458] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700461] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.700469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.156 [2024-06-10 14:32:59.700483] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.156 [2024-06-10 14:32:59.700488] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d5c0, cid 5, qid 0 00:24:22.156 [2024-06-10 14:32:59.700683] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.700690] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.700693] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700697] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.700704] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.700710] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.700713] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700716] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d5c0) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.700726] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700730] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.700736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.700745] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d5c0, cid 5, qid 0 00:24:22.156 [2024-06-10 14:32:59.700936] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.700942] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.700945] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700949] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d5c0) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.700958] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.700962] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.700968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.700977] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d5c0, cid 5, qid 0 00:24:22.156 [2024-06-10 14:32:59.701200] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.701207] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.701210] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701214] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d5c0) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.701223] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701226] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.701233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.701242] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d5c0, cid 5, qid 0 00:24:22.156 [2024-06-10 14:32:59.701428] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.156 [2024-06-10 14:32:59.701435] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.156 [2024-06-10 14:32:59.701438] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701442] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d5c0) on tqpair=0x19f9ec0 00:24:22.156 [2024-06-10 14:32:59.701453] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701459] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.701465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.701472] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701476] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.701482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.701489] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701492] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.701498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.701505] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701508] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19f9ec0) 00:24:22.156 [2024-06-10 14:32:59.701514] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.156 [2024-06-10 14:32:59.701525] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d5c0, cid 5, qid 0 00:24:22.156 [2024-06-10 14:32:59.701530] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d440, cid 4, qid 0 00:24:22.156 [2024-06-10 14:32:59.701535] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d740, cid 6, qid 0 00:24:22.156 [2024-06-10 14:32:59.701540] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d8c0, cid 7, qid 0 00:24:22.156 [2024-06-10 14:32:59.701789] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.156 [2024-06-10 14:32:59.701795] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.156 [2024-06-10 14:32:59.701799] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701802] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=8192, cccid=5 00:24:22.156 [2024-06-10 14:32:59.701806] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d5c0) on tqpair(0x19f9ec0): expected_datao=0, payload_size=8192 00:24:22.156 [2024-06-10 14:32:59.701811] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701889] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.156 [2024-06-10 14:32:59.701893] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701899] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.157 [2024-06-10 14:32:59.701905] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.157 [2024-06-10 14:32:59.701908] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701911] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=512, cccid=4 00:24:22.157 [2024-06-10 14:32:59.701916] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d440) on tqpair(0x19f9ec0): expected_datao=0, payload_size=512 00:24:22.157 [2024-06-10 14:32:59.701920] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701926] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701930] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701935] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.157 [2024-06-10 14:32:59.701941] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.157 [2024-06-10 14:32:59.701944] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701947] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=512, cccid=6 00:24:22.157 [2024-06-10 14:32:59.701953] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d740) on tqpair(0x19f9ec0): expected_datao=0, payload_size=512 00:24:22.157 [2024-06-10 14:32:59.701957] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701964] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701967] 
nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701973] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:22.157 [2024-06-10 14:32:59.701978] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:22.157 [2024-06-10 14:32:59.701982] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.701985] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19f9ec0): datao=0, datal=4096, cccid=7 00:24:22.157 [2024-06-10 14:32:59.701989] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a7d8c0) on tqpair(0x19f9ec0): expected_datao=0, payload_size=4096 00:24:22.157 [2024-06-10 14:32:59.701993] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702000] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702003] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702034] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.157 [2024-06-10 14:32:59.702040] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.157 [2024-06-10 14:32:59.702043] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702047] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d5c0) on tqpair=0x19f9ec0 00:24:22.157 [2024-06-10 14:32:59.702059] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.157 [2024-06-10 14:32:59.702065] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.157 [2024-06-10 14:32:59.702069] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702072] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d440) on tqpair=0x19f9ec0 00:24:22.157 [2024-06-10 14:32:59.702081] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.157 [2024-06-10 14:32:59.702087] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.157 [2024-06-10 14:32:59.702090] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702094] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d740) on tqpair=0x19f9ec0 00:24:22.157 [2024-06-10 14:32:59.702103] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.157 [2024-06-10 14:32:59.702108] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.157 [2024-06-10 14:32:59.702112] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.157 [2024-06-10 14:32:59.702115] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d8c0) on tqpair=0x19f9ec0 00:24:22.157 ===================================================== 00:24:22.157 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.157 ===================================================== 00:24:22.157 Controller Capabilities/Features 00:24:22.157 ================================ 00:24:22.157 Vendor ID: 8086 00:24:22.157 Subsystem Vendor ID: 8086 00:24:22.157 Serial Number: SPDK00000000000001 00:24:22.157 Model Number: SPDK bdev Controller 00:24:22.157 Firmware Version: 24.09 00:24:22.157 Recommended Arb Burst: 6 00:24:22.157 IEEE OUI Identifier: e4 d2 5c 00:24:22.157 Multi-path I/O 00:24:22.157 May have multiple subsystem 
ports: Yes 00:24:22.157 May have multiple controllers: Yes 00:24:22.157 Associated with SR-IOV VF: No 00:24:22.157 Max Data Transfer Size: 131072 00:24:22.157 Max Number of Namespaces: 32 00:24:22.157 Max Number of I/O Queues: 127 00:24:22.157 NVMe Specification Version (VS): 1.3 00:24:22.157 NVMe Specification Version (Identify): 1.3 00:24:22.157 Maximum Queue Entries: 128 00:24:22.157 Contiguous Queues Required: Yes 00:24:22.157 Arbitration Mechanisms Supported 00:24:22.157 Weighted Round Robin: Not Supported 00:24:22.157 Vendor Specific: Not Supported 00:24:22.157 Reset Timeout: 15000 ms 00:24:22.157 Doorbell Stride: 4 bytes 00:24:22.157 NVM Subsystem Reset: Not Supported 00:24:22.157 Command Sets Supported 00:24:22.157 NVM Command Set: Supported 00:24:22.157 Boot Partition: Not Supported 00:24:22.157 Memory Page Size Minimum: 4096 bytes 00:24:22.157 Memory Page Size Maximum: 4096 bytes 00:24:22.157 Persistent Memory Region: Not Supported 00:24:22.157 Optional Asynchronous Events Supported 00:24:22.157 Namespace Attribute Notices: Supported 00:24:22.157 Firmware Activation Notices: Not Supported 00:24:22.157 ANA Change Notices: Not Supported 00:24:22.157 PLE Aggregate Log Change Notices: Not Supported 00:24:22.157 LBA Status Info Alert Notices: Not Supported 00:24:22.157 EGE Aggregate Log Change Notices: Not Supported 00:24:22.157 Normal NVM Subsystem Shutdown event: Not Supported 00:24:22.157 Zone Descriptor Change Notices: Not Supported 00:24:22.157 Discovery Log Change Notices: Not Supported 00:24:22.157 Controller Attributes 00:24:22.157 128-bit Host Identifier: Supported 00:24:22.157 Non-Operational Permissive Mode: Not Supported 00:24:22.157 NVM Sets: Not Supported 00:24:22.157 Read Recovery Levels: Not Supported 00:24:22.157 Endurance Groups: Not Supported 00:24:22.157 Predictable Latency Mode: Not Supported 00:24:22.157 Traffic Based Keep ALive: Not Supported 00:24:22.157 Namespace Granularity: Not Supported 00:24:22.157 SQ Associations: Not Supported 00:24:22.157 UUID List: Not Supported 00:24:22.157 Multi-Domain Subsystem: Not Supported 00:24:22.157 Fixed Capacity Management: Not Supported 00:24:22.157 Variable Capacity Management: Not Supported 00:24:22.157 Delete Endurance Group: Not Supported 00:24:22.157 Delete NVM Set: Not Supported 00:24:22.157 Extended LBA Formats Supported: Not Supported 00:24:22.157 Flexible Data Placement Supported: Not Supported 00:24:22.157 00:24:22.157 Controller Memory Buffer Support 00:24:22.157 ================================ 00:24:22.157 Supported: No 00:24:22.157 00:24:22.157 Persistent Memory Region Support 00:24:22.157 ================================ 00:24:22.157 Supported: No 00:24:22.157 00:24:22.157 Admin Command Set Attributes 00:24:22.157 ============================ 00:24:22.157 Security Send/Receive: Not Supported 00:24:22.157 Format NVM: Not Supported 00:24:22.157 Firmware Activate/Download: Not Supported 00:24:22.157 Namespace Management: Not Supported 00:24:22.157 Device Self-Test: Not Supported 00:24:22.157 Directives: Not Supported 00:24:22.157 NVMe-MI: Not Supported 00:24:22.157 Virtualization Management: Not Supported 00:24:22.157 Doorbell Buffer Config: Not Supported 00:24:22.157 Get LBA Status Capability: Not Supported 00:24:22.157 Command & Feature Lockdown Capability: Not Supported 00:24:22.157 Abort Command Limit: 4 00:24:22.157 Async Event Request Limit: 4 00:24:22.157 Number of Firmware Slots: N/A 00:24:22.157 Firmware Slot 1 Read-Only: N/A 00:24:22.157 Firmware Activation Without Reset: N/A 00:24:22.157 Multiple 
Update Detection Support: N/A 00:24:22.157 Firmware Update Granularity: No Information Provided 00:24:22.157 Per-Namespace SMART Log: No 00:24:22.157 Asymmetric Namespace Access Log Page: Not Supported 00:24:22.157 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:22.157 Command Effects Log Page: Supported 00:24:22.157 Get Log Page Extended Data: Supported 00:24:22.157 Telemetry Log Pages: Not Supported 00:24:22.157 Persistent Event Log Pages: Not Supported 00:24:22.157 Supported Log Pages Log Page: May Support 00:24:22.157 Commands Supported & Effects Log Page: Not Supported 00:24:22.157 Feature Identifiers & Effects Log Page:May Support 00:24:22.157 NVMe-MI Commands & Effects Log Page: May Support 00:24:22.157 Data Area 4 for Telemetry Log: Not Supported 00:24:22.157 Error Log Page Entries Supported: 128 00:24:22.157 Keep Alive: Supported 00:24:22.157 Keep Alive Granularity: 10000 ms 00:24:22.157 00:24:22.157 NVM Command Set Attributes 00:24:22.157 ========================== 00:24:22.157 Submission Queue Entry Size 00:24:22.157 Max: 64 00:24:22.157 Min: 64 00:24:22.157 Completion Queue Entry Size 00:24:22.157 Max: 16 00:24:22.157 Min: 16 00:24:22.157 Number of Namespaces: 32 00:24:22.157 Compare Command: Supported 00:24:22.157 Write Uncorrectable Command: Not Supported 00:24:22.157 Dataset Management Command: Supported 00:24:22.157 Write Zeroes Command: Supported 00:24:22.157 Set Features Save Field: Not Supported 00:24:22.157 Reservations: Supported 00:24:22.157 Timestamp: Not Supported 00:24:22.158 Copy: Supported 00:24:22.158 Volatile Write Cache: Present 00:24:22.158 Atomic Write Unit (Normal): 1 00:24:22.158 Atomic Write Unit (PFail): 1 00:24:22.158 Atomic Compare & Write Unit: 1 00:24:22.158 Fused Compare & Write: Supported 00:24:22.158 Scatter-Gather List 00:24:22.158 SGL Command Set: Supported 00:24:22.158 SGL Keyed: Supported 00:24:22.158 SGL Bit Bucket Descriptor: Not Supported 00:24:22.158 SGL Metadata Pointer: Not Supported 00:24:22.158 Oversized SGL: Not Supported 00:24:22.158 SGL Metadata Address: Not Supported 00:24:22.158 SGL Offset: Supported 00:24:22.158 Transport SGL Data Block: Not Supported 00:24:22.158 Replay Protected Memory Block: Not Supported 00:24:22.158 00:24:22.158 Firmware Slot Information 00:24:22.158 ========================= 00:24:22.158 Active slot: 1 00:24:22.158 Slot 1 Firmware Revision: 24.09 00:24:22.158 00:24:22.158 00:24:22.158 Commands Supported and Effects 00:24:22.158 ============================== 00:24:22.158 Admin Commands 00:24:22.158 -------------- 00:24:22.158 Get Log Page (02h): Supported 00:24:22.158 Identify (06h): Supported 00:24:22.158 Abort (08h): Supported 00:24:22.158 Set Features (09h): Supported 00:24:22.158 Get Features (0Ah): Supported 00:24:22.158 Asynchronous Event Request (0Ch): Supported 00:24:22.158 Keep Alive (18h): Supported 00:24:22.158 I/O Commands 00:24:22.158 ------------ 00:24:22.158 Flush (00h): Supported LBA-Change 00:24:22.158 Write (01h): Supported LBA-Change 00:24:22.158 Read (02h): Supported 00:24:22.158 Compare (05h): Supported 00:24:22.158 Write Zeroes (08h): Supported LBA-Change 00:24:22.158 Dataset Management (09h): Supported LBA-Change 00:24:22.158 Copy (19h): Supported LBA-Change 00:24:22.158 Unknown (79h): Supported LBA-Change 00:24:22.158 Unknown (7Ah): Supported 00:24:22.158 00:24:22.158 Error Log 00:24:22.158 ========= 00:24:22.158 00:24:22.158 Arbitration 00:24:22.158 =========== 00:24:22.158 Arbitration Burst: 1 00:24:22.158 00:24:22.158 Power Management 00:24:22.158 ================ 00:24:22.158 
Number of Power States: 1 00:24:22.158 Current Power State: Power State #0 00:24:22.158 Power State #0: 00:24:22.158 Max Power: 0.00 W 00:24:22.158 Non-Operational State: Operational 00:24:22.158 Entry Latency: Not Reported 00:24:22.158 Exit Latency: Not Reported 00:24:22.158 Relative Read Throughput: 0 00:24:22.158 Relative Read Latency: 0 00:24:22.158 Relative Write Throughput: 0 00:24:22.158 Relative Write Latency: 0 00:24:22.158 Idle Power: Not Reported 00:24:22.158 Active Power: Not Reported 00:24:22.158 Non-Operational Permissive Mode: Not Supported 00:24:22.158 00:24:22.158 Health Information 00:24:22.158 ================== 00:24:22.158 Critical Warnings: 00:24:22.158 Available Spare Space: OK 00:24:22.158 Temperature: OK 00:24:22.158 Device Reliability: OK 00:24:22.158 Read Only: No 00:24:22.158 Volatile Memory Backup: OK 00:24:22.158 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:22.158 Temperature Threshold: [2024-06-10 14:32:59.702213] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702219] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.702225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.702236] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d8c0, cid 7, qid 0 00:24:22.158 [2024-06-10 14:32:59.702405] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.702411] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.702415] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702418] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d8c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.702445] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:22.158 [2024-06-10 14:32:59.702458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.158 [2024-06-10 14:32:59.702464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.158 [2024-06-10 14:32:59.702470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.158 [2024-06-10 14:32:59.702476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.158 [2024-06-10 14:32:59.702484] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702488] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702491] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.702498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.702510] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.158 [2024-06-10 14:32:59.702689] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:22.158 [2024-06-10 14:32:59.702695] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.702698] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702702] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.702709] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702712] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702716] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.702722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.702734] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.158 [2024-06-10 14:32:59.702909] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.702915] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.702919] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702922] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.702928] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:22.158 [2024-06-10 14:32:59.702932] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:22.158 [2024-06-10 14:32:59.702941] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702945] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.702948] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.702955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.702964] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.158 [2024-06-10 14:32:59.703135] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.703141] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.703145] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703148] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.703159] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703162] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703167] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.703174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.703184] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, 
qid 0 00:24:22.158 [2024-06-10 14:32:59.703397] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.703403] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.703406] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703410] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.703421] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703424] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703428] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.703434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.703444] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.158 [2024-06-10 14:32:59.703641] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.703647] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.158 [2024-06-10 14:32:59.703651] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703654] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.158 [2024-06-10 14:32:59.703664] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703668] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.158 [2024-06-10 14:32:59.703671] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.158 [2024-06-10 14:32:59.703678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.158 [2024-06-10 14:32:59.703687] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.158 [2024-06-10 14:32:59.703904] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.158 [2024-06-10 14:32:59.703910] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.159 [2024-06-10 14:32:59.703913] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.703917] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.159 [2024-06-10 14:32:59.703927] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.703931] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.703934] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.159 [2024-06-10 14:32:59.703941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.159 [2024-06-10 14:32:59.703950] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.159 [2024-06-10 14:32:59.704122] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.159 [2024-06-10 14:32:59.704129] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:22.159 [2024-06-10 14:32:59.704132] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.704136] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.159 [2024-06-10 14:32:59.704146] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.704149] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.704153] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.159 [2024-06-10 14:32:59.704161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.159 [2024-06-10 14:32:59.704171] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.159 [2024-06-10 14:32:59.708326] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.159 [2024-06-10 14:32:59.708334] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.159 [2024-06-10 14:32:59.708337] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.708341] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.159 [2024-06-10 14:32:59.708352] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.708356] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.708359] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19f9ec0) 00:24:22.159 [2024-06-10 14:32:59.708366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.159 [2024-06-10 14:32:59.708376] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a7d2c0, cid 3, qid 0 00:24:22.159 [2024-06-10 14:32:59.708564] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:22.159 [2024-06-10 14:32:59.708571] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:22.159 [2024-06-10 14:32:59.708574] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:22.159 [2024-06-10 14:32:59.708578] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a7d2c0) on tqpair=0x19f9ec0 00:24:22.159 [2024-06-10 14:32:59.708586] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:22.159 0 Kelvin (-273 Celsius) 00:24:22.159 Available Spare: 0% 00:24:22.159 Available Spare Threshold: 0% 00:24:22.159 Life Percentage Used: 0% 00:24:22.159 Data Units Read: 0 00:24:22.159 Data Units Written: 0 00:24:22.159 Host Read Commands: 0 00:24:22.159 Host Write Commands: 0 00:24:22.159 Controller Busy Time: 0 minutes 00:24:22.159 Power Cycles: 0 00:24:22.159 Power On Hours: 0 hours 00:24:22.159 Unsafe Shutdowns: 0 00:24:22.159 Unrecoverable Media Errors: 0 00:24:22.159 Lifetime Error Log Entries: 0 00:24:22.159 Warning Temperature Time: 0 minutes 00:24:22.159 Critical Temperature Time: 0 minutes 00:24:22.159 00:24:22.159 Number of Queues 00:24:22.159 ================ 00:24:22.159 Number of I/O Submission Queues: 127 00:24:22.159 Number of I/O Completion Queues: 127 00:24:22.159 00:24:22.159 Active Namespaces 00:24:22.159 ================= 
00:24:22.159 Namespace ID:1 00:24:22.159 Error Recovery Timeout: Unlimited 00:24:22.159 Command Set Identifier: NVM (00h) 00:24:22.159 Deallocate: Supported 00:24:22.159 Deallocated/Unwritten Error: Not Supported 00:24:22.159 Deallocated Read Value: Unknown 00:24:22.159 Deallocate in Write Zeroes: Not Supported 00:24:22.159 Deallocated Guard Field: 0xFFFF 00:24:22.159 Flush: Supported 00:24:22.159 Reservation: Supported 00:24:22.159 Namespace Sharing Capabilities: Multiple Controllers 00:24:22.159 Size (in LBAs): 131072 (0GiB) 00:24:22.159 Capacity (in LBAs): 131072 (0GiB) 00:24:22.159 Utilization (in LBAs): 131072 (0GiB) 00:24:22.159 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:22.159 EUI64: ABCDEF0123456789 00:24:22.159 UUID: ddf93d5c-d8f4-4565-93a7-7cbb278780b5 00:24:22.159 Thin Provisioning: Not Supported 00:24:22.159 Per-NS Atomic Units: Yes 00:24:22.159 Atomic Boundary Size (Normal): 0 00:24:22.159 Atomic Boundary Size (PFail): 0 00:24:22.159 Atomic Boundary Offset: 0 00:24:22.159 Maximum Single Source Range Length: 65535 00:24:22.159 Maximum Copy Length: 65535 00:24:22.159 Maximum Source Range Count: 1 00:24:22.159 NGUID/EUI64 Never Reused: No 00:24:22.159 Namespace Write Protected: No 00:24:22.159 Number of LBA Formats: 1 00:24:22.159 Current LBA Format: LBA Format #00 00:24:22.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:22.159 00:24:22.159 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:22.159 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.159 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.159 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.420 rmmod nvme_tcp 00:24:22.420 rmmod nvme_fabrics 00:24:22.420 rmmod nvme_keyring 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3129603 ']' 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3129603 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 3129603 ']' 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 3129603 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3129603 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3129603' 00:24:22.420 killing process with pid 3129603 00:24:22.420 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 3129603 00:24:22.421 14:32:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 3129603 00:24:22.421 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.421 14:32:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.421 14:33:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.968 14:33:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.968 00:24:24.968 real 0m11.006s 00:24:24.968 user 0m8.021s 00:24:24.968 sys 0m5.682s 00:24:24.968 14:33:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:24.968 14:33:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.968 ************************************ 00:24:24.968 END TEST nvmf_identify 00:24:24.968 ************************************ 00:24:24.968 14:33:02 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:24.968 14:33:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:24.968 14:33:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:24.968 14:33:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.968 ************************************ 00:24:24.968 START TEST nvmf_perf 00:24:24.968 ************************************ 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:24.968 * Looking for test storage... 
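The identify-test teardown recorded above reduces to a short manual sequence; a minimal sketch, assuming the target started from this SPDK tree is still running (relative paths are stand-ins for the absolute workspace paths in the log):

    # remove the subsystem exercised by the identify test, then stop the target
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3129603                      # nvmfpid recorded for this run
    # unload the kernel initiator modules pulled in by nvmftestinit
    # (nvme_keyring drops out with nvme-fabrics, as the rmmod output above shows)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # clear the test address from the initiator-side interface
    ip -4 addr flush cvl_0_1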
00:24:24.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.968 14:33:02 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.968 14:33:02 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.969 14:33:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:33.112 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:33.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:33.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:33.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:33.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:24:33.113 00:24:33.113 --- 10.0.0.2 ping statistics --- 00:24:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.113 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:33.113 00:24:33.113 --- 10.0.0.1 ping statistics --- 00:24:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.113 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3134291 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3134291 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 3134291 ']' 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:33.113 14:33:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.113 [2024-06-10 14:33:09.614652] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:24:33.113 [2024-06-10 14:33:09.614712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.113 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.113 [2024-06-10 14:33:09.703042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.113 [2024-06-10 14:33:09.799491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.113 [2024-06-10 14:33:09.799549] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
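The nvmf_tcp_init and nvmfappstart steps above amount to isolating one port of the E810 pair in a private network namespace and launching the target inside it; a condensed sketch using the interface names and addresses from this run (the nvmf_tgt path is shown relative here, the log uses the absolute workspace path):

    # the target port lives in its own namespace, the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target in the namespace and poll its RPC socket (/var/tmp/spdk.sock) until it answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &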
00:24:33.113 [2024-06-10 14:33:09.799558] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.113 [2024-06-10 14:33:09.799564] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.113 [2024-06-10 14:33:09.799571] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.113 [2024-06-10 14:33:09.799633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.113 [2024-06-10 14:33:09.799761] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.114 [2024-06-10 14:33:09.799928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.114 [2024-06-10 14:33:09.799930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:33.114 14:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:33.684 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:33.684 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:33.945 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.205 [2024-06-10 14:33:11.696904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.205 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.466 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:34.466 14:33:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.727 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:34.727 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:34.988 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.988 [2024-06-10 14:33:12.572266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.250 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:35.250 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:35.250 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:35.250 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:35.250 14:33:12 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:36.673 Initializing NVMe Controllers 00:24:36.673 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:36.673 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:36.673 Initialization complete. Launching workers. 00:24:36.673 ======================================================== 00:24:36.673 Latency(us) 00:24:36.673 Device Information : IOPS MiB/s Average min max 00:24:36.673 PCIE (0000:65:00.0) NSID 1 from core 0: 79078.87 308.90 404.26 13.58 7437.84 00:24:36.673 ======================================================== 00:24:36.673 Total : 79078.87 308.90 404.26 13.58 7437.84 00:24:36.673 00:24:36.673 14:33:14 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.673 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.059 Initializing NVMe Controllers 00:24:38.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:38.059 Initialization complete. Launching workers. 
00:24:38.059 ======================================================== 00:24:38.060 Latency(us) 00:24:38.060 Device Information : IOPS MiB/s Average min max 00:24:38.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.77 0.34 11665.41 224.29 45879.72 00:24:38.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.82 0.26 15083.33 7452.74 47889.43 00:24:38.060 ======================================================== 00:24:38.060 Total : 154.59 0.60 13142.83 224.29 47889.43 00:24:38.060 00:24:38.060 14:33:15 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.060 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.444 Initializing NVMe Controllers 00:24:39.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:39.444 Initialization complete. Launching workers. 00:24:39.444 ======================================================== 00:24:39.444 Latency(us) 00:24:39.444 Device Information : IOPS MiB/s Average min max 00:24:39.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8911.99 34.81 3591.45 479.42 7382.94 00:24:39.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.00 14.90 8434.07 5967.01 16137.29 00:24:39.444 ======================================================== 00:24:39.444 Total : 12726.99 49.71 5043.06 479.42 16137.29 00:24:39.444 00:24:39.444 14:33:16 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:39.444 14:33:16 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:39.444 14:33:16 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.990 Initializing NVMe Controllers 00:24:41.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.990 Controller IO queue size 128, less than required. 00:24:41.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.990 Controller IO queue size 128, less than required. 00:24:41.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.990 Initialization complete. Launching workers. 
00:24:41.990 ======================================================== 00:24:41.990 Latency(us) 00:24:41.990 Device Information : IOPS MiB/s Average min max 00:24:41.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1560.25 390.06 83356.37 48297.47 126059.48 00:24:41.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.46 147.12 229415.45 62909.01 355402.76 00:24:41.990 ======================================================== 00:24:41.990 Total : 2148.71 537.18 123357.22 48297.47 355402.76 00:24:41.990 00:24:41.990 14:33:19 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:41.990 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.990 No valid NVMe controllers or AIO or URING devices found 00:24:41.990 Initializing NVMe Controllers 00:24:41.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.990 Controller IO queue size 128, less than required. 00:24:41.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.990 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:41.990 Controller IO queue size 128, less than required. 00:24:41.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:41.990 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:41.990 WARNING: Some requested NVMe devices were skipped 00:24:41.990 14:33:19 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:41.990 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.537 Initializing NVMe Controllers 00:24:44.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.537 Controller IO queue size 128, less than required. 00:24:44.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.537 Controller IO queue size 128, less than required. 00:24:44.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.537 Initialization complete. Launching workers. 
00:24:44.537 00:24:44.537 ==================== 00:24:44.537 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:44.537 TCP transport: 00:24:44.537 polls: 20767 00:24:44.537 idle_polls: 12000 00:24:44.537 sock_completions: 8767 00:24:44.537 nvme_completions: 6083 00:24:44.537 submitted_requests: 9086 00:24:44.537 queued_requests: 1 00:24:44.537 00:24:44.537 ==================== 00:24:44.537 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:44.537 TCP transport: 00:24:44.537 polls: 20593 00:24:44.537 idle_polls: 11030 00:24:44.537 sock_completions: 9563 00:24:44.537 nvme_completions: 8343 00:24:44.537 submitted_requests: 12470 00:24:44.537 queued_requests: 1 00:24:44.537 ======================================================== 00:24:44.537 Latency(us) 00:24:44.537 Device Information : IOPS MiB/s Average min max 00:24:44.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.28 380.07 85826.03 47050.75 164466.45 00:24:44.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2085.20 521.30 61616.82 32401.36 103917.98 00:24:44.537 ======================================================== 00:24:44.537 Total : 3605.48 901.37 71824.83 32401.36 164466.45 00:24:44.537 00:24:44.537 14:33:21 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:44.537 14:33:21 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:44.797 rmmod nvme_tcp 00:24:44.797 rmmod nvme_fabrics 00:24:44.797 rmmod nvme_keyring 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3134291 ']' 00:24:44.797 14:33:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3134291 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 3134291 ']' 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 3134291 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3134291 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@967 -- # echo 'killing process with pid 3134291' 00:24:44.798 killing process with pid 3134291 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 3134291 00:24:44.798 14:33:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 3134291 00:24:46.711 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.712 14:33:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.256 14:33:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.256 00:24:49.256 real 0m24.167s 00:24:49.256 user 0m59.202s 00:24:49.256 sys 0m8.343s 00:24:49.256 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:49.256 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.256 ************************************ 00:24:49.256 END TEST nvmf_perf 00:24:49.256 ************************************ 00:24:49.256 14:33:26 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:49.256 14:33:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:49.256 14:33:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:49.256 14:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.256 ************************************ 00:24:49.256 START TEST nvmf_fio_host 00:24:49.256 ************************************ 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:49.256 * Looking for test storage... 
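The nvmf_fio_host run that starts here drives the target entirely over JSON-RPC and then points fio's SPDK NVMe ioengine at the exported namespace over TCP. What follows is only a condensed, hand-written summary of the calls that appear verbatim in the trace below; the long workspace paths are shortened to rpc.py and relative paths, and the target application itself is launched by the test inside the cvl_0_0_ns_spdk namespace before any of these RPCs are issued.

```bash
# Condensed sketch of the RPC/fio sequence traced below (paths shortened, not a verbatim replay).
rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as traced
rpc.py bdev_malloc_create 64 512 -b Malloc1                         # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio then runs through the SPDK plugin (LD_PRELOAD of build/fio/spdk_nvme), addressing the
# namespace by transport string instead of a kernel block device:
LD_PRELOAD=build/fio/spdk_nvme fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
```

The two fio result blocks further down (example_config.fio at 4 KiB and mock_sgl_config.fio at 16 KiB) come from exactly this invocation pattern.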
00:24:49.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.256 14:33:26 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.846 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.846 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.847 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:56.108 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:56.108 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
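With both E810 ports identified, nvmftestinit drops into nvmf_tcp_init, which moves the target-side port into its own network namespace so the NVMe/TCP traffic crosses a real link rather than loopback. Below is a minimal sketch of the topology the next stretch of trace builds, using the interface names (cvl_0_0, cvl_0_1), addresses, and port that appear in the trace; on other hosts the device names would differ.

```bash
# Sketch of the netns split performed by nvmf_tcp_init (commands as traced below).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface (as traced)

ping -c 1 10.0.0.2                                    # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
```

Every nvmf_tgt started later in this log is then prefixed with ip netns exec cvl_0_0_ns_spdk, so it listens on 10.0.0.2:4420 behind that namespace while the initiator-side tools connect from the root namespace.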
00:24:56.108 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.109 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:24:56.370 00:24:56.370 --- 10.0.0.2 ping statistics --- 00:24:56.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.370 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:24:56.370 00:24:56.370 --- 10.0.0.1 ping statistics --- 00:24:56.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.370 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3141571 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3141571 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 3141571 ']' 00:24:56.370 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.371 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:56.371 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.371 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:56.371 14:33:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.371 [2024-06-10 14:33:33.846048] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:24:56.371 [2024-06-10 14:33:33.846114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.371 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.371 [2024-06-10 14:33:33.932753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.631 [2024-06-10 14:33:34.030262] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:56.631 [2024-06-10 14:33:34.030324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.631 [2024-06-10 14:33:34.030332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.631 [2024-06-10 14:33:34.030339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.631 [2024-06-10 14:33:34.030345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.631 [2024-06-10 14:33:34.030411] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.631 [2024-06-10 14:33:34.030580] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.631 [2024-06-10 14:33:34.030749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.631 [2024-06-10 14:33:34.030750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.202 14:33:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:57.202 14:33:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:57.202 14:33:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.462 [2024-06-10 14:33:34.920671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.462 14:33:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:57.462 14:33:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:57.462 14:33:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.462 14:33:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:57.723 Malloc1 00:24:57.723 14:33:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.983 14:33:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:58.244 14:33:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.244 [2024-06-10 14:33:35.818835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.505 14:33:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:58.505 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:58.794 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:58.794 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:58.794 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.794 14:33:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:59.058 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:59.058 fio-3.35 00:24:59.058 Starting 1 thread 00:24:59.058 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.624 00:25:01.624 test: (groupid=0, jobs=1): err= 0: pid=3142405: Mon Jun 10 14:33:38 2024 00:25:01.624 read: IOPS=9834, BW=38.4MiB/s (40.3MB/s)(77.1MiB/2006msec) 00:25:01.624 slat (usec): min=2, max=275, avg= 2.20, stdev= 2.77 00:25:01.624 clat (usec): min=3154, max=12259, avg=7159.61, stdev=517.58 00:25:01.624 lat (usec): min=3188, max=12261, avg=7161.81, stdev=517.39 00:25:01.624 clat percentiles (usec): 00:25:01.624 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:25:01.624 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:25:01.624 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 7963], 00:25:01.624 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[10814], 99.95th=[11731], 00:25:01.624 | 99.99th=[12256] 00:25:01.624 bw ( KiB/s): min=38304, 
max=39960, per=99.93%, avg=39312.00, stdev=742.32, samples=4 00:25:01.624 iops : min= 9576, max= 9990, avg=9828.00, stdev=185.58, samples=4 00:25:01.624 write: IOPS=9847, BW=38.5MiB/s (40.3MB/s)(77.2MiB/2006msec); 0 zone resets 00:25:01.624 slat (usec): min=2, max=268, avg= 2.29, stdev= 2.10 00:25:01.624 clat (usec): min=2894, max=11637, avg=5756.70, stdev=434.62 00:25:01.624 lat (usec): min=2912, max=11639, avg=5758.99, stdev=434.50 00:25:01.624 clat percentiles (usec): 00:25:01.624 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5407], 00:25:01.624 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:25:01.624 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:25:01.624 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 9372], 99.95th=[10683], 00:25:01.624 | 99.99th=[11600] 00:25:01.624 bw ( KiB/s): min=38808, max=40016, per=100.00%, avg=39402.00, stdev=499.49, samples=4 00:25:01.624 iops : min= 9702, max=10004, avg=9850.50, stdev=124.87, samples=4 00:25:01.624 lat (msec) : 4=0.06%, 10=99.83%, 20=0.11% 00:25:01.624 cpu : usr=74.46%, sys=24.29%, ctx=19, majf=0, minf=6 00:25:01.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:01.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:01.624 issued rwts: total=19729,19755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:01.624 00:25:01.624 Run status group 0 (all jobs): 00:25:01.624 READ: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=77.1MiB (80.8MB), run=2006-2006msec 00:25:01.624 WRITE: bw=38.5MiB/s (40.3MB/s), 38.5MiB/s-38.5MiB/s (40.3MB/s-40.3MB/s), io=77.2MiB (80.9MB), run=2006-2006msec 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # 
awk '{print $3}' 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:01.624 14:33:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.888 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:01.888 fio-3.35 00:25:01.888 Starting 1 thread 00:25:01.888 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.465 00:25:04.465 test: (groupid=0, jobs=1): err= 0: pid=3142939: Mon Jun 10 14:33:41 2024 00:25:04.465 read: IOPS=8888, BW=139MiB/s (146MB/s)(284MiB/2048msec) 00:25:04.465 slat (usec): min=3, max=111, avg= 3.65, stdev= 1.63 00:25:04.465 clat (usec): min=2245, max=52362, avg=8680.29, stdev=3909.95 00:25:04.465 lat (usec): min=2249, max=52366, avg=8683.94, stdev=3910.02 00:25:04.465 clat percentiles (usec): 00:25:04.465 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6587], 00:25:04.465 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8848], 00:25:04.465 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[11731], 00:25:04.465 | 99.00th=[14746], 99.50th=[46400], 99.90th=[52167], 99.95th=[52167], 00:25:04.465 | 99.99th=[52167] 00:25:04.465 bw ( KiB/s): min=63232, max=80544, per=50.46%, avg=71760.00, stdev=7129.73, samples=4 00:25:04.465 iops : min= 3952, max= 5034, avg=4485.00, stdev=445.61, samples=4 00:25:04.465 write: IOPS=5120, BW=80.0MiB/s (83.9MB/s)(146MiB/1829msec); 0 zone resets 00:25:04.465 slat (usec): min=40, max=400, avg=41.26, stdev= 8.18 00:25:04.465 clat (usec): min=3598, max=55150, avg=9975.56, stdev=4896.86 00:25:04.465 lat (usec): min=3638, max=55190, avg=10016.82, stdev=4897.33 00:25:04.465 clat percentiles (usec): 00:25:04.465 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8094], 00:25:04.465 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:04.465 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12911], 00:25:04.465 | 99.00th=[46400], 99.50th=[50594], 99.90th=[54264], 99.95th=[54789], 00:25:04.465 | 99.99th=[55313] 00:25:04.465 bw ( KiB/s): min=65056, max=83904, per=90.92%, avg=74496.00, stdev=7697.89, samples=4 00:25:04.465 iops : min= 4066, max= 5244, avg=4656.00, stdev=481.12, samples=4 00:25:04.465 lat (msec) : 4=0.36%, 10=73.41%, 20=25.31%, 50=0.54%, 100=0.38% 00:25:04.465 cpu : usr=83.54%, sys=14.80%, ctx=24, majf=0, minf=13 00:25:04.465 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:04.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.465 issued rwts: total=18204,9366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:04.465 00:25:04.465 Run status group 0 (all jobs): 00:25:04.465 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=284MiB (298MB), run=2048-2048msec 00:25:04.465 WRITE: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=146MiB (153MB), run=1829-1829msec 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:04.465 rmmod nvme_tcp 00:25:04.465 rmmod nvme_fabrics 00:25:04.465 rmmod nvme_keyring 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3141571 ']' 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3141571 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 3141571 ']' 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 3141571 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:04.465 14:33:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3141571 00:25:04.465 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:04.465 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:04.465 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3141571' 00:25:04.465 killing process with pid 3141571 00:25:04.465 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 3141571 00:25:04.465 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 3141571 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:04.726 
14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.726 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.641 14:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:06.902 00:25:06.902 real 0m17.841s 00:25:06.902 user 1m13.539s 00:25:06.902 sys 0m7.424s 00:25:06.902 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:06.902 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.902 ************************************ 00:25:06.902 END TEST nvmf_fio_host 00:25:06.902 ************************************ 00:25:06.902 14:33:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.902 14:33:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:06.902 14:33:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:06.902 14:33:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:06.902 ************************************ 00:25:06.902 START TEST nvmf_failover 00:25:06.902 ************************************ 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.902 * Looking for test storage... 
00:25:06.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.902 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.903 14:33:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:13.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:13.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.494 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:13.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:13.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:13.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:13.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:25:13.495 00:25:13.495 --- 10.0.0.2 ping statistics --- 00:25:13.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.495 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:25:13.495 00:25:13.495 --- 10.0.0.1 ping statistics --- 00:25:13.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.495 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3147578 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3147578 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3147578 ']' 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:13.495 14:33:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.495 [2024-06-10 14:33:51.001388] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
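With is_hw=yes the harness takes the TCP branch of nvmf_tcp_init: the two e810 ports discovered just above (cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1, ice driver) are split between a fresh network namespace and the root namespace, so cvl_0_0 becomes the target interface (10.0.0.2 inside cvl_0_0_ns_spdk) while cvl_0_1 stays outside as the initiator interface (10.0.0.1), and the two pings above confirm both directions before the target application is started. Condensed from the trace, with the interface, namespace and address names exactly as they appear in this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow TCP port 4420 through the host firewall on the initiator-side interface
    ping -c 1 10.0.0.2                                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator

The "Starting SPDK" and "DPDK EAL parameters" lines around this point are nvmf_tgt (pid 3147578) coming up under ip netns exec cvl_0_0_ns_spdk, as launched a few entries above.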
00:25:13.495 [2024-06-10 14:33:51.001443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.495 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.495 [2024-06-10 14:33:51.068059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.755 [2024-06-10 14:33:51.138785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.755 [2024-06-10 14:33:51.138818] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.755 [2024-06-10 14:33:51.138826] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.755 [2024-06-10 14:33:51.138832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.755 [2024-06-10 14:33:51.138837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.755 [2024-06-10 14:33:51.138939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.755 [2024-06-10 14:33:51.139096] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.755 [2024-06-10 14:33:51.139097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.755 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.015 [2024-06-10 14:33:51.453145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.016 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:14.275 Malloc0 00:25:14.276 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.276 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.536 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.536 [2024-06-10 14:33:52.125703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.796 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:14.796 [2024-06-10 14:33:52.342391] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:14.796 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:15.056 [2024-06-10 14:33:52.510839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3147893 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3147893 /var/tmp/bdevperf.sock 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3147893 ']' 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:15.056 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.996 14:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:15.997 14:33:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:15.997 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.257 NVMe0n1 00:25:16.257 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:16.517 00:25:16.517 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3148073 00:25:16.517 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:16.517 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.458 14:33:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.718 [2024-06-10 14:33:55.095455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095493] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the 
state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095499] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095519] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095524] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 [2024-06-10 14:33:55.095528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9a9e0 is same with the state(5) to be set 00:25:17.718 14:33:55 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:21.015 14:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.015 00:25:21.015 14:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.277 [2024-06-10 14:33:58.747284] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747330] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747339] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747352] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747371] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.277 [2024-06-10 14:33:58.747377] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747390] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 
[2024-06-10 14:33:58.747402] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747408] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747414] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747421] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747427] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747439] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747445] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747451] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747462] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747475] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747481] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747501] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747507] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747513] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747519] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747525] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747531] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747538] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the 
state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747544] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747550] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747568] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747574] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747580] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747588] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747620] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747626] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747634] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747643] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747649] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747663] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747682] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747694] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747702] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747716] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747722] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747730] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747736] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747749] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747763] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747777] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747812] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747820] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747827] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747833] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747839] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747845] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747851] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747863] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747877] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747883] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747889] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747896] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.278 [2024-06-10 14:33:58.747921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747933] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747951] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 
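The wall of tcp.c:1602:nvmf_tcp_qpair_set_recv_state messages above and below is the target-side TCP transport logging while the connection that was using a just-removed listener is torn down; each burst lines up with one of the nvmf_subsystem_remove_listener calls (14:33:55 after port 4420 was removed, 14:33:58 after port 4421), and the run still completes successfully further down. Stripped of the Jenkins workspace paths, the scenario host/failover.sh has been driving up to this point looks roughly like the sketch below; rpc.py stands for scripts/rpc.py, the three add_listener calls are collapsed into a loop here, and the backgrounding with & is shorthand for the script's waitforlisten handling of the bdevperf RPC socket:

    # target: TCP transport, one malloc namespace, one subsystem with three portals
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # initiator: bdevperf waits for RPCs (-z), then gets two paths to the same controller name
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # the 15-second verify workload (run_test_pid)

    # force failovers while the workload is running
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3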
00:25:21.279 [2024-06-10 14:33:58.747969] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.747995] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748001] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748026] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748050] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748082] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748094] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748100] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is 
same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748106] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748113] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748119] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748125] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 [2024-06-10 14:33:58.748131] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9c0e0 is same with the state(5) to be set 00:25:21.279 14:33:58 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:24.580 14:34:01 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.580 [2024-06-10 14:34:01.916950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.580 14:34:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:25.522 14:34:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:25.783 14:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3148073 00:25:32.435 0 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3147893 ']' 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3147893' 00:25:32.435 killing process with pid 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3147893 00:25:32.435 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:32.435 [2024-06-10 14:33:52.589572] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
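The block that starts with the "Starting SPDK v24.09-pre" line just above and continues below is the contents of test/nvmf/host/try.txt, i.e. bdevperf's own log for the verify run, dumped by the cat at host/failover.sh line 63. Before that dump, the script has already failed back and cleaned up; condensed, the tail of the scenario (variable names as used in the script trace) is:

    # fail back to the original portal, then drop the temporary one
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait "$run_test_pid"        # perform_tests finished with status 0, printed above
    killprocess "$bdevperf_pid"
    cat "$testdir/try.txt"      # the bdevperf log reproduced below

The ABORTED - SQ DELETION completions that fill the dump appear to be the I/Os that were still outstanding on a path whose listener had just been removed: they complete with that status as the queue pair is deleted, while the workload carries on over the remaining path, which is consistent with the overall result above still being 0.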
00:25:32.435 [2024-06-10 14:33:52.589632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147893 ] 00:25:32.435 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.435 [2024-06-10 14:33:52.667032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.435 [2024-06-10 14:33:52.735884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.435 Running I/O for 15 seconds... 00:25:32.435 [2024-06-10 14:33:55.098025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.435 [2024-06-10 14:33:55.098061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95056 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 
14:33:55.098378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.435 [2024-06-10 14:33:55.098539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.435 [2024-06-10 14:33:55.098546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.436 [2024-06-10 14:33:55.098977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.098997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95448 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099047] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95456 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95472 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95488 len:8 PRP1 0x0 PRP2 0x0 00:25:32.436 [2024-06-10 14:33:55.099170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.436 [2024-06-10 14:33:55.099177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.436 [2024-06-10 14:33:55.099183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.436 [2024-06-10 14:33:55.099189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95496 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95504 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95512 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95536 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95544 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 
14:33:55.099367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95560 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099520] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94872 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94888 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:32.437 [2024-06-10 14:33:55.099682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94896 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.437 [2024-06-10 14:33:55.099778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.437 [2024-06-10 14:33:55.099784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94928 len:8 PRP1 0x0 PRP2 0x0 00:25:32.437 [2024-06-10 14:33:55.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.437 [2024-06-10 14:33:55.099799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95568 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95576 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95584 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95592 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.099980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.099985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.099991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.099997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 
0x0 00:25:32.438 [2024-06-10 14:33:55.100153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.100271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.100277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.100287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.100295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.110208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.110237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.110248] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.110260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.110266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.110272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.110279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.110286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.110291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.110298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95736 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.110306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.110322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.438 [2024-06-10 14:33:55.110328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.438 [2024-06-10 14:33:55.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95744 len:8 PRP1 0x0 PRP2 0x0 00:25:32.438 [2024-06-10 14:33:55.110341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.438 [2024-06-10 14:33:55.110348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94936 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94944 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94952 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94960 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94968 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 
14:33:55.110747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94976 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94984 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.439 [2024-06-10 14:33:55.110805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.439 [2024-06-10 14:33:55.110811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94992 len:8 PRP1 0x0 PRP2 0x0 00:25:32.439 [2024-06-10 14:33:55.110818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.439 [2024-06-10 14:33:55.110856] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x179f2d0 was disconnected and freed. reset controller. 00:25:32.439 [2024-06-10 14:33:55.110866] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:32.439 [2024-06-10 14:33:55.110891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.439 [2024-06-10 14:33:55.110900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:55.110912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.440 [2024-06-10 14:33:55.110919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:55.110927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.440 [2024-06-10 14:33:55.110934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:55.110942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.440 [2024-06-10 14:33:55.110949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:55.110956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
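The long run of notices above is the SPDK NVMe driver manually completing every queued READ/WRITE on qid 1 with an ABORTED - SQ DELETION (00/08) status while the TCP connection to 10.0.0.2:4420 is torn down and the controller is failed over. When triaging a log like this it can help to condense the burst into counts rather than reading each record; the snippet below is a small, hypothetical helper (not part of SPDK or this test suite) that tallies the printed commands and abort completions from the console output on stdin, assuming the log text is piped in unchanged (e.g. python3 summarize_aborts.py < console.log).

import re
import sys
from collections import Counter

# Grounded on the record formats visible above, e.g.
#   "... nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95384 len:8 ..."
#   "... spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
CMD_RE = re.compile(r"(READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")

cmds = Counter()    # (opcode, sqid) -> number of commands printed
aborts = Counter()  # qid -> number of ABORTED - SQ DELETION completions
lbas = set()        # distinct LBAs referenced by the printed commands

for line in sys.stdin:
    # A single console line may hold several records, so scan with finditer.
    for m in CMD_RE.finditer(line):
        cmds[(m.group(1), int(m.group(2)))] += 1
        lbas.add(int(m.group(3)))
    aborts.update(int(m.group(1)) for m in ABORT_RE.finditer(line))

for (opcode, sqid), n in sorted(cmds.items()):
    print(f"{opcode} commands printed on sqid {sqid}: {n}")
for qid, n in sorted(aborts.items()):
    print(f"ABORTED - SQ DELETION completions on qid {qid}: {n}")
print(f"distinct LBAs referenced: {len(lbas)}")

This is only a reading aid for the transcript; the failover itself (disconnect of the old qpair, failover to 10.0.0.2:4421, controller reset) is reported by the driver in the records that follow.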
00:25:32.440 [2024-06-10 14:33:55.111000] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1779140 (9): Bad file descriptor 00:25:32.440 [2024-06-10 14:33:55.114513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.440 [2024-06-10 14:33:55.147280] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:32.440 [2024-06-10 14:33:58.749812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.749989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.749997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:32.440 [2024-06-10 14:33:58.750334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.440 [2024-06-10 14:33:58.750417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.440 [2024-06-10 14:33:58.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 
14:33:58.750498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.441 [2024-06-10 14:33:58.750684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750820] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.750987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.751005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.751014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.751021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.751030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.751037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.751046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.441 [2024-06-10 14:33:58.751053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.441 [2024-06-10 14:33:58.751062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 
14:33:58.751310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.442 [2024-06-10 14:33:58.751457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:107288 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107296 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107304 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107312 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107320 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107328 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107336 len:8 PRP1 0x0 PRP2 0x0 
00:25:32.442 [2024-06-10 14:33:58.751652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107344 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.442 [2024-06-10 14:33:58.751695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107352 len:8 PRP1 0x0 PRP2 0x0 00:25:32.442 [2024-06-10 14:33:58.751702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.442 [2024-06-10 14:33:58.751709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.442 [2024-06-10 14:33:58.751716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107360 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107368 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107376 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107384 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107392 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107400 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107408 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107416 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107424 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107432 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.751978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107440 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.751985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.751992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.751997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107448 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107456 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107464 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107472 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107480 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107488 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107496 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.763977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107504 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.763990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.763997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.443 [2024-06-10 14:33:58.764003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.443 [2024-06-10 14:33:58.764009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107512 len:8 PRP1 0x0 PRP2 0x0 00:25:32.443 [2024-06-10 14:33:58.764016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.764057] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1943d40 was disconnected and freed. reset controller. 
00:25:32.443 [2024-06-10 14:33:58.764066] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:32.443 [2024-06-10 14:33:58.764092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.443 [2024-06-10 14:33:58.764101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.764110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.443 [2024-06-10 14:33:58.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.764125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.443 [2024-06-10 14:33:58.764132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.764141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.443 [2024-06-10 14:33:58.764149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.443 [2024-06-10 14:33:58.764156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.443 [2024-06-10 14:33:58.764183] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1779140 (9): Bad file descriptor 00:25:32.443 [2024-06-10 14:33:58.767725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.443 [2024-06-10 14:33:58.807276] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
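The burst of messages above shows bdev_nvme aborting the I/O still queued on qpair 0x1943d40, starting a failover from 10.0.0.2:4421 to 10.0.0.2:4422 on nqn.2016-06.io.spdk:cnode1, and then resetting the controller successfully. For reference, a minimal sketch of how such an alternate path is typically registered through SPDK's rpc.py is shown below; the bdev name Nvme0, the rpc.py location, and the use of the -x failover multipath option are illustrative assumptions and are not taken from this build log (the addresses, port numbers, and subsystem NQN do match the messages above).

    # Minimal sketch, assuming ./scripts/rpc.py and a bdev named Nvme0.
    rpc_py=./scripts/rpc.py
    # Attach the primary path; when its queues are deleted, the queued I/O is
    # aborted with "ABORTED - SQ DELETION" as printed above.
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Register the alternate path; bdev_nvme can then fail over to it, matching
    # "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422" in the log.
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover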
00:25:32.443 [2024-06-10 14:34:03.098277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.444 [2024-06-10 14:34:03.098323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098662] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33464 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.444 [2024-06-10 14:34:03.098891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.444 [2024-06-10 14:34:03.098900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.098986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.098995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 
[2024-06-10 14:34:03.099002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.445 [2024-06-10 14:34:03.099154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.445 [2024-06-10 14:34:03.099161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:32.445 [2024-06-10 14:34:03.099170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[the same pair of notices repeats through 14:34:03.111785 for every remaining queued command on sqid:1, WRITE lba 33632-34184 and READ lba 33176-33224, len:8 each: nvme_qpair_abort_queued_reqs reports 'aborting queued i/o', nvme_qpair_manual_complete_request completes the request manually, nvme_io_qpair_print_command prints it, and spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:32.448 [2024-06-10 14:34:03.111824] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1943b30 was disconnected and freed. reset controller.
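The completion status repeated above decodes, against the standard NVMe completion-status fields, as follows (an interpretation of the fields, not something the log states itself):

    (00/08)  -> SCT 0x0 (Generic Command Status), SC 0x08 (Command Aborted due to SQ Deletion)
    qid:1    -> the I/O queue pair being torn down; the qid:0 entries further on are admin commands
    dnr:0    -> Do Not Retry is clear, so the initiator is free to resubmit these I/Os after failover
    cdw0:0 sqhd:0000 p:0 m:0 -> no command-specific result, submission queue head 0, phase and more bits clear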
00:25:32.448 [2024-06-10 14:34:03.111834] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:32.448 [2024-06-10 14:34:03.111865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.448 [2024-06-10 14:34:03.111873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-06-10 14:34:03.111883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.448 [2024-06-10 14:34:03.111891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-06-10 14:34:03.111899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.448 [2024-06-10 14:34:03.111906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-06-10 14:34:03.111914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.448 [2024-06-10 14:34:03.111922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.448 [2024-06-10 14:34:03.111929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.448 [2024-06-10 14:34:03.111956] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1779140 (9): Bad file descriptor 00:25:32.448 [2024-06-10 14:34:03.115466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:32.448 [2024-06-10 14:34:03.157837] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
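Condensed from the shell traces later in this console, the scenario that produces the failover above boils down to a handful of RPC calls. The sketch below reuses the exact commands this job issues; RPC is just shorthand for the rpc.py path used throughout, and the bdevperf RPC socket is assumed to be up already:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side: expose the same subsystem on two additional ports
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side (bdevperf): attach the same bdev name once per path; the
    # bdev_nvme_failover_trid notices in this log show the extra ports being kept as alternate trids
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # dropping the path that currently carries I/O forces a failover like the one logged above
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Each forced failover should add one 'Resetting controller successful' line to the output, which is what the grep -c check just below counts against an expected total of 3.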
00:25:32.448
00:25:32.448 Latency(us)
00:25:32.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:32.448 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:32.448 Verification LBA range: start 0x0 length 0x4000
00:25:32.448 NVMe0n1 : 15.01 9176.50 35.85 273.24 0.00 13516.70 518.83 25231.36
00:25:32.448 ===================================================================================================================
00:25:32.448 Total : 9176.50 35.85 273.24 0.00 13516.70 518.83 25231.36
00:25:32.448 Received shutdown signal, test time was about 15.000000 seconds
00:25:32.448
00:25:32.448 Latency(us)
00:25:32.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:32.448 ===================================================================================================================
00:25:32.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3150991
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3150991 /var/tmp/bdevperf.sock
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 3150991 ']'
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:32.448 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
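The bdevperf instance relaunched above is started with -z, which, going by the 'Waiting for process to start up and listen on UNIX domain socket' message here and the perform_tests call traced further down, keeps it idle until it has been configured and started over its RPC socket. Stripped of the test harness, the pattern looks roughly like this (same binaries and arguments as in this log, $SPDK standing in for the checkout path, error handling omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle, with its JSON-RPC server on /var/tmp/bdevperf.sock
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # configure the bdev it should exercise
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the actual I/O run
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests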
00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.449 [2024-06-10 14:34:09.708939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.449 [2024-06-10 14:34:09.873348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:32.449 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.710 NVMe0n1 00:25:32.710 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.971 00:25:32.971 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:33.231 00:25:33.231 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:33.231 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:33.492 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:33.492 14:34:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:36.866 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.866 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:36.866 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.866 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3152000 00:25:36.866 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3152000 00:25:37.809 0 00:25:37.809 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.809 [2024-06-10 14:34:09.344489] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:25:37.809 [2024-06-10 14:34:09.344544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150991 ] 00:25:37.809 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.809 [2024-06-10 14:34:09.418928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.809 [2024-06-10 14:34:09.482680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.809 [2024-06-10 14:34:10.974343] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:37.809 [2024-06-10 14:34:10.974388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-06-10 14:34:10.974399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-06-10 14:34:10.974409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-06-10 14:34:10.974417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-06-10 14:34:10.974425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-06-10 14:34:10.974432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-06-10 14:34:10.974440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-06-10 14:34:10.974447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-06-10 14:34:10.974454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.809 [2024-06-10 14:34:10.974480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.809 [2024-06-10 14:34:10.974495] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2140 (9): Bad file descriptor 00:25:37.809 [2024-06-10 14:34:11.026378] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:37.809 Running I/O for 1 seconds... 
00:25:37.809 00:25:37.809 Latency(us) 00:25:37.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.810 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.810 Verification LBA range: start 0x0 length 0x4000 00:25:37.810 NVMe0n1 : 1.01 9094.38 35.52 0.00 0.00 14016.91 2375.68 15728.64 00:25:37.810 =================================================================================================================== 00:25:37.810 Total : 9094.38 35.52 0.00 0.00 14016.91 2375.68 15728.64 00:25:37.810 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.810 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:38.072 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.333 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.333 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:38.333 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.594 14:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3150991 ']' 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3150991' 00:25:41.898 killing process with pid 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3150991 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:41.898 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:42.159 
14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.159 rmmod nvme_tcp 00:25:42.159 rmmod nvme_fabrics 00:25:42.159 rmmod nvme_keyring 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3147578 ']' 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3147578 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 3147578 ']' 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 3147578 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3147578 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3147578' 00:25:42.159 killing process with pid 3147578 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 3147578 00:25:42.159 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 3147578 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.421 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.966 14:34:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.966 00:25:44.966 real 0m37.644s 00:25:44.966 user 1m57.739s 00:25:44.966 sys 0m7.548s 00:25:44.966 14:34:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:44.966 14:34:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:25:44.966 ************************************ 00:25:44.966 END TEST nvmf_failover 00:25:44.966 ************************************ 00:25:44.966 14:34:21 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.966 14:34:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:44.966 14:34:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:44.966 14:34:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.966 ************************************ 00:25:44.966 START TEST nvmf_host_discovery 00:25:44.966 ************************************ 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.966 * Looking for test storage... 00:25:44.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.966 14:34:22 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.966 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.568 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:51.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:51.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:51.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:51.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.569 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:25:51.831 00:25:51.831 --- 10.0.0.2 ping statistics --- 00:25:51.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.831 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:25:51.831 00:25:51.831 --- 10.0.0.1 ping statistics --- 00:25:51.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.831 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:51.831 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3157323 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3157323 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 3157323 ']' 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:52.093 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.093 [2024-06-10 14:34:29.481405] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:25:52.093 [2024-06-10 14:34:29.481466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.093 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.093 [2024-06-10 14:34:29.550110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.093 [2024-06-10 14:34:29.622380] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.093 [2024-06-10 14:34:29.622428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.093 [2024-06-10 14:34:29.622436] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.093 [2024-06-10 14:34:29.622442] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.093 [2024-06-10 14:34:29.622448] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
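At this point the target has been launched inside the cvl_0_0_ns_spdk namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 3157323) and the script is waiting on its RPC socket. The configuration the following trace applies through rpc_cmd can also be written as direct scripts/rpc.py calls; a sketch, assuming rpc_cmd forwards to that script and to the default /var/tmp/spdk.sock socket:

  # Target-side setup performed next in the trace (sketch, not the script's own code).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192          # create the TCP transport
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512                  # two null bdevs to expose as namespaces later
  $rpc bdev_null_create null1 1000 512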
00:25:52.093 [2024-06-10 14:34:29.622466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 [2024-06-10 14:34:29.755926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 [2024-06-10 14:34:29.768080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 null0 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 null1 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3157343 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3157343 /tmp/host.sock 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 3157343 ']' 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:52.355 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:52.355 14:34:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.355 [2024-06-10 14:34:29.854953] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:25:52.355 [2024-06-10 14:34:29.854999] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157343 ] 00:25:52.355 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.355 [2024-06-10 14:34:29.928598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.616 [2024-06-10 14:34:29.993082] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.616 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 [2024-06-10 14:34:30.445811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.878 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.139 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.140 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.140 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:25:53.140 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:53.710 [2024-06-10 14:34:31.152529] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.711 [2024-06-10 14:34:31.152551] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.711 [2024-06-10 14:34:31.152565] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.711 [2024-06-10 14:34:31.238848] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:53.971 [2024-06-10 14:34:31.418563] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:53.971 [2024-06-10 14:34:31.418586] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.232 14:34:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:54.232 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 [2024-06-10 14:34:31.990010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:54.494 [2024-06-10 14:34:31.990506] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:54.494 [2024-06-10 14:34:31.990533] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.494 [2024-06-10 14:34:32.078804] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:54.494 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.754 [2024-06-10 14:34:32.136395] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.754 [2024-06-10 14:34:32.136413] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.754 [2024-06-10 14:34:32.136419] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:54.754 14:34:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.765 [2024-06-10 14:34:33.269747] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:55.765 [2024-06-10 14:34:33.269772] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.765 [2024-06-10 14:34:33.270378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.765 [2024-06-10 14:34:33.270395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.765 [2024-06-10 14:34:33.270404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.765 [2024-06-10 14:34:33.270412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.765 [2024-06-10 14:34:33.270419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.765 [2024-06-10 14:34:33.270426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.765 [2024-06-10 14:34:33.270434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.765 [2024-06-10 14:34:33.270441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.765 [2024-06-10 14:34:33.270448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:55.765 14:34:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.765 [2024-06-10 14:34:33.280391] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.765 [2024-06-10 14:34:33.290434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.765 [2024-06-10 14:34:33.290761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-06-10 14:34:33.290776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.765 [2024-06-10 14:34:33.290785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.765 [2024-06-10 14:34:33.290796] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.765 [2024-06-10 14:34:33.290815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.765 [2024-06-10 14:34:33.290825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.765 [2024-06-10 14:34:33.290832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.765 [2024-06-10 14:34:33.290844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:55.765 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.765 [2024-06-10 14:34:33.300488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.765 [2024-06-10 14:34:33.300786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-06-10 14:34:33.300797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.765 [2024-06-10 14:34:33.300805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.765 [2024-06-10 14:34:33.300816] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.765 [2024-06-10 14:34:33.300826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.765 [2024-06-10 14:34:33.300832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.765 [2024-06-10 14:34:33.300839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.765 [2024-06-10 14:34:33.300858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.765 [2024-06-10 14:34:33.310541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.765 [2024-06-10 14:34:33.310842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.765 [2024-06-10 14:34:33.310853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.765 [2024-06-10 14:34:33.310860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.766 [2024-06-10 14:34:33.310875] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.766 [2024-06-10 14:34:33.310892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.766 [2024-06-10 14:34:33.310901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.766 [2024-06-10 14:34:33.310908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.766 [2024-06-10 14:34:33.310919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:55.766 [2024-06-10 14:34:33.320593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.766 [2024-06-10 14:34:33.320898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-06-10 14:34:33.320910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.766 [2024-06-10 14:34:33.320918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.766 [2024-06-10 14:34:33.320929] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.766 [2024-06-10 14:34:33.320948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.766 [2024-06-10 14:34:33.320957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.766 [2024-06-10 14:34:33.320964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.766 [2024-06-10 14:34:33.320975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.766 [2024-06-10 14:34:33.330648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.766 [2024-06-10 14:34:33.330949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-06-10 14:34:33.330961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.766 [2024-06-10 14:34:33.330968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.766 [2024-06-10 14:34:33.330979] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.766 [2024-06-10 14:34:33.330996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.766 [2024-06-10 14:34:33.331004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.766 [2024-06-10 14:34:33.331011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.766 [2024-06-10 14:34:33.331021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.766 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.766 [2024-06-10 14:34:33.340699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.766 [2024-06-10 14:34:33.341003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-06-10 14:34:33.341014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.766 [2024-06-10 14:34:33.341021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.766 [2024-06-10 14:34:33.341031] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.766 [2024-06-10 14:34:33.341050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.766 [2024-06-10 14:34:33.341059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.766 [2024-06-10 14:34:33.341065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.766 [2024-06-10 14:34:33.341075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:55.766 [2024-06-10 14:34:33.350751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:55.766 [2024-06-10 14:34:33.350982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.766 [2024-06-10 14:34:33.351000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:55.766 [2024-06-10 14:34:33.351008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:55.766 [2024-06-10 14:34:33.351020] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:55.766 [2024-06-10 14:34:33.351030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:55.766 [2024-06-10 14:34:33.351036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:55.766 [2024-06-10 14:34:33.351043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:55.766 [2024-06-10 14:34:33.351053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.027 [2024-06-10 14:34:33.360805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.027 [2024-06-10 14:34:33.361113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-06-10 14:34:33.361124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:56.027 [2024-06-10 14:34:33.361132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:56.027 [2024-06-10 14:34:33.361142] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:56.027 [2024-06-10 14:34:33.361153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:56.027 [2024-06-10 14:34:33.361159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:56.027 [2024-06-10 14:34:33.361165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:56.027 [2024-06-10 14:34:33.361176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.027 [2024-06-10 14:34:33.370858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.027 [2024-06-10 14:34:33.371162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-06-10 14:34:33.371173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:56.027 [2024-06-10 14:34:33.371180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:56.027 [2024-06-10 14:34:33.371190] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:56.027 [2024-06-10 14:34:33.371200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:56.027 [2024-06-10 14:34:33.371206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:56.027 [2024-06-10 14:34:33.371213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:56.027 [2024-06-10 14:34:33.371223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.027 [2024-06-10 14:34:33.380910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.027 [2024-06-10 14:34:33.381213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-06-10 14:34:33.381223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:56.027 [2024-06-10 14:34:33.381230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:56.027 [2024-06-10 14:34:33.381240] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:56.027 [2024-06-10 14:34:33.381250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:56.027 [2024-06-10 14:34:33.381256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:56.027 [2024-06-10 14:34:33.381263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:56.027 [2024-06-10 14:34:33.381273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.027 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.027 [2024-06-10 14:34:33.390959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.027 [2024-06-10 14:34:33.391159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.027 [2024-06-10 14:34:33.391171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217ab90 with addr=10.0.0.2, port=4420 00:25:56.027 [2024-06-10 14:34:33.391178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ab90 is same with the state(5) to be set 00:25:56.028 [2024-06-10 14:34:33.391188] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217ab90 (9): Bad file descriptor 00:25:56.028 [2024-06-10 14:34:33.391198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:56.028 [2024-06-10 14:34:33.391204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:56.028 [2024-06-10 14:34:33.391211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:56.028 [2024-06-10 14:34:33.391221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.028 [2024-06-10 14:34:33.400356] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:56.028 [2024-06-10 14:34:33.400376] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.028 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.028 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:56.028 14:34:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:56.972 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:57.232 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:57.233 14:34:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.233 14:34:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.613 [2024-06-10 14:34:35.776525] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:58.613 [2024-06-10 14:34:35.776544] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:58.613 [2024-06-10 14:34:35.776556] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.613 [2024-06-10 14:34:35.864826] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:58.613 [2024-06-10 14:34:35.970613] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.613 [2024-06-10 14:34:35.970642] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.613 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.613 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.613 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:58.613 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.614 request: 00:25:58.614 { 00:25:58.614 "name": "nvme", 00:25:58.614 "trtype": "tcp", 00:25:58.614 "traddr": "10.0.0.2", 00:25:58.614 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.614 "adrfam": "ipv4", 00:25:58.614 "trsvcid": "8009", 00:25:58.614 "wait_for_attach": true, 00:25:58.614 "method": "bdev_nvme_start_discovery", 00:25:58.614 "req_id": 1 00:25:58.614 } 00:25:58.614 Got JSON-RPC error response 00:25:58.614 response: 00:25:58.614 { 00:25:58.614 "code": -17, 00:25:58.614 "message": "File exists" 00:25:58.614 } 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.614 14:34:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.614 request: 00:25:58.614 { 00:25:58.614 "name": "nvme_second", 00:25:58.614 "trtype": "tcp", 00:25:58.614 "traddr": "10.0.0.2", 00:25:58.614 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.614 "adrfam": "ipv4", 00:25:58.614 "trsvcid": "8009", 00:25:58.614 "wait_for_attach": true, 00:25:58.614 "method": "bdev_nvme_start_discovery", 00:25:58.614 "req_id": 1 00:25:58.614 } 00:25:58.614 Got JSON-RPC error response 00:25:58.614 response: 00:25:58.614 { 00:25:58.614 "code": -17, 00:25:58.614 "message": "File exists" 00:25:58.614 } 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.614 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.875 14:34:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.875 14:34:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.816 [2024-06-10 14:34:37.235526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.816 [2024-06-10 14:34:37.235554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2178640 with addr=10.0.0.2, port=8010 00:25:59.816 [2024-06-10 14:34:37.235567] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:59.816 [2024-06-10 14:34:37.235574] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:59.816 [2024-06-10 14:34:37.235580] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:00.808 [2024-06-10 14:34:38.238004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.808 [2024-06-10 14:34:38.238031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2178640 with addr=10.0.0.2, port=8010 00:26:00.808 [2024-06-10 14:34:38.238043] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:00.808 [2024-06-10 14:34:38.238050] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:00.808 [2024-06-10 14:34:38.238056] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.750 [2024-06-10 14:34:39.239993] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:01.750 request: 00:26:01.750 { 00:26:01.750 "name": "nvme_second", 00:26:01.750 "trtype": "tcp", 00:26:01.750 "traddr": "10.0.0.2", 00:26:01.750 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.750 "adrfam": "ipv4", 00:26:01.750 "trsvcid": "8010", 00:26:01.750 "attach_timeout_ms": 3000, 00:26:01.750 "method": "bdev_nvme_start_discovery", 00:26:01.750 "req_id": 1 00:26:01.750 } 00:26:01.750 Got JSON-RPC error response 00:26:01.750 response: 00:26:01.750 { 00:26:01.750 "code": -110, 00:26:01.750 "message": "Connection timed out" 
00:26:01.750 } 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3157343 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.750 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.750 rmmod nvme_tcp 00:26:01.750 rmmod nvme_fabrics 00:26:02.013 rmmod nvme_keyring 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3157323 ']' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 3157323 ']' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3157323' 00:26:02.013 killing process with pid 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 3157323 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.013 14:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.557 00:26:04.557 real 0m19.602s 00:26:04.557 user 0m23.368s 00:26:04.557 sys 0m6.685s 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.557 ************************************ 00:26:04.557 END TEST nvmf_host_discovery 00:26:04.557 ************************************ 00:26:04.557 14:34:41 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.557 14:34:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:04.557 14:34:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:04.557 14:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.557 ************************************ 00:26:04.557 START TEST nvmf_host_multipath_status 00:26:04.557 ************************************ 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.557 * Looking for test storage... 
00:26:04.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:04.557 14:34:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.557 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.558 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:04.558 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:04.558 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.558 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:11.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.142 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:11.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
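The nvmf/common.sh trace above is enumerating the known NVMe-oF-capable NIC PCI device IDs (e810, x722, mlx) and resolving each matching device to its kernel net interface through sysfs. A minimal sketch of that discovery pattern follows; the reduced ID list and the loop layout are illustrative assumptions, not the actual common.sh code.

    # Sketch: match known Intel E810 PCI IDs, then map each PCI function to its net device.
    intel=0x8086
    e810_ids=("0x1592" "0x159b")   # reduced list for illustration; the real script also covers x722 and mlx parts
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810_ids[@]}"; do
            if [[ $device == "$id" ]]; then
                echo "Found ${pci##*/} ($vendor - $device)"   # e.g. Found 0000:4b:00.0 (0x8086 - 0x159b)
                for net in "$pci"/net/*; do                   # net/ names the bound interface, e.g. cvl_0_0
                    [[ -e $net ]] && net_devs+=("${net##*/}")
                done
            fi
        done
    done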
00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:11.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:11.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:11.143 14:34:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.143 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:11.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:26:11.404 00:26:11.404 --- 10.0.0.2 ping statistics --- 00:26:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.404 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:11.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:26:11.404 00:26:11.404 --- 10.0.0.1 ping statistics --- 00:26:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.404 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:11.404 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3163517 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3163517 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3163517 ']' 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:11.664 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:11.664 [2024-06-10 14:34:49.075805] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:26:11.665 [2024-06-10 14:34:49.075859] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.665 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.665 [2024-06-10 14:34:49.147239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:11.665 [2024-06-10 14:34:49.215443] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.665 [2024-06-10 14:34:49.215481] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.665 [2024-06-10 14:34:49.215491] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.665 [2024-06-10 14:34:49.215500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.665 [2024-06-10 14:34:49.215508] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.665 [2024-06-10 14:34:49.215631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.665 [2024-06-10 14:34:49.215636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3163517 00:26:12.606 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:12.606 [2024-06-10 14:34:50.151395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.606 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:12.878 Malloc0 00:26:12.878 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:13.143 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.404 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.404 [2024-06-10 14:34:50.957954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.404 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:13.664 [2024-06-10 14:34:51.158471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3163882 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3163882 /var/tmp/bdevperf.sock 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 3163882 ']' 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:13.664 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.925 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:13.925 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:13.925 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:14.186 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:14.446 Nvme0n1 00:26:14.446 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.016 Nvme0n1 00:26:15.016 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:15.016 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:16.926 14:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:16.926 14:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:17.185 14:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:17.445 14:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:18.393 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:18.393 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.393 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.393 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.695 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.695 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.695 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.695 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.955 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.216 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.216 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.216 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.216 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:26:19.478 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.478 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.478 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.478 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.738 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.738 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:19.738 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.999 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:20.261 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:21.203 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:21.203 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:21.203 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.203 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.463 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.463 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:21.463 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.463 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.723 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.986 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.986 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.986 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.986 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.245 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.245 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.245 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.245 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.506 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.506 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:22.506 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:22.767 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.027 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.968 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.228 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.228 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.228 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.228 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.487 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.487 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.487 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.487 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.746 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.746 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:24.746 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.746 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:25.006 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.266 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:25.526 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:26.468 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:26.468 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:26.468 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.468 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.728 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.728 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:26.728 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.728 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.989 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.989 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.989 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.989 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.250 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.250 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.250 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.250 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.511 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.511 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.511 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.511 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.511 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
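The repeated port_status checks in the trace query bdevperf's RPC socket for the NVMe I/O paths and extract one field ("current", "connected" or "accessible") for a given listener port with jq, then compare it against the expected value. A minimal sketch of that helper, reconstructed from the commands above rather than copied from multipath_status.sh, is:

    # Sketch: ask bdevperf for its I/O paths and check one attribute of one listener port.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # Example from this point in the run: after non_optimized/inaccessible,
    # the 4421 path should still be connected but no longer accessible.
    port_status 4421 connected true
    port_status 4421 accessible false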
00:26:27.511 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:27.511 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.511 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.771 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.771 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:27.771 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:28.032 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.292 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:29.232 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:29.232 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.232 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.232 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.492 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.492 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.492 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.492 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.752 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.752 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.752 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.752 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
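Each transition in the trace is driven by a set_ANA_state step: one nvmf_subsystem_listener_set_ana_state RPC per listener port on the target-side socket, followed by a one-second sleep before the next check_status round so the host's multipath view can settle. A sketch of that step, assuming the helper shape implied by the log, is:

    # Sketch: flip the ANA state of the two listeners (ports 4420 and 4421) on cnode1.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }

    # Example transition from this point in the run: both listeners become inaccessible.
    set_ANA_state inaccessible inaccessible
    sleep 1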
00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.012 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.273 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.273 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.273 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.273 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.533 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.533 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:30.533 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:30.794 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:31.054 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:31.998 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:31.998 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:31.998 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.998 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.258 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.258 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.258 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.258 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.518 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.518 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.518 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.518 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.518 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.518 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.518 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.518 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.782 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.782 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.782 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.782 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.045 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.045 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.045 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.045 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.304 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.304 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:33.563 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:33.563 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:33.824 14:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.824 14:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.207 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.468 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.468 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.468 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.468 14:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.468 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.468 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.468 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.468 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.793 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.793 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.793 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.793 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.060 14:35:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.060 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:36.060 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.060 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.320 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.320 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:36.320 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.581 14:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.581 14:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.964 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.224 14:35:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.224 14:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.485 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.485 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.485 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.485 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.745 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.745 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.745 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.745 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.005 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.005 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:39.005 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:39.266 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:39.527 14:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:40.470 14:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:40.470 14:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.470 14:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.470 14:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.732 14:35:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.732 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.992 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.992 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.992 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.992 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.254 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.254 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:41.254 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.254 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.515 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.515 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:41.515 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.515 14:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.775 14:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.775 14:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:41.775 14:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:42.037 14:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:42.037 14:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.421 14:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.682 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.682 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:43.682 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.682 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.942 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.942 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.942 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.942 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:44.203 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.203 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:44.203 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.203 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:44.203 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3163882 ']' 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3163882' 00:26:44.468 killing process with pid 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3163882 00:26:44.468 Connection closed with partial response: 00:26:44.468 00:26:44.468 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3163882 00:26:44.468 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.468 [2024-06-10 14:34:51.220022] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:26:44.468 [2024-06-10 14:34:51.220075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163882 ] 00:26:44.468 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.468 [2024-06-10 14:34:51.268345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.468 [2024-06-10 14:34:51.320685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.468 Running I/O for 90 seconds... 
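The trace above keeps re-running two small helpers from host/multipath_status.sh: set_ANA_state flips the ANA state of the two target listeners through rpc.py, and port_status queries bdevperf's view of each path with bdev_nvme_get_io_paths plus a jq filter and compares the selected field against the expected value. Below is a minimal bash sketch of those helpers, reconstructed from the commands echoed at multipath_status.sh@59, @60 and @64; the RPC socket, NQN, addresses and ports are copied from the log, while the exact function bodies and variable names are an assumption, not the verbatim test script.

# Sketch only -- reconstructed from the echoed trace, not copied from the SPDK repo.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
	# $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {
	# $1 = listener port, $2 = io_path field (current/connected/accessible), $3 = expected value
	local status
	status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
	[[ "$status" == "$3" ]]
}

# Example, mirroring one step of the sequence above:
#   set_ANA_state non_optimized inaccessible; sleep 1
#   port_status 4421 accessible false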
00:26:44.468 [2024-06-10 14:35:05.502240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.468 [2024-06-10 14:35:05.502270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:44.468 [2024-06-10 14:35:05.502302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.468 [2024-06-10 14:35:05.502308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.502418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:44.469 [2024-06-10 14:35:05.502823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.502850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.502855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.469 [2024-06-10 14:35:05.503966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.503979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.503985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.503997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.504002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.504015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.504020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.504032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.469 [2024-06-10 14:35:05.504038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:44.469 [2024-06-10 14:35:05.504050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 
nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:26:44.470 [2024-06-10 14:35:05.504479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.470 [2024-06-10 14:35:05.504726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.470 [2024-06-10 14:35:05.504739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.470 [2024-06-10 14:35:05.504744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.471 [2024-06-10 14:35:05.504853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.504983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.504996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
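Each pair of nvme_qpair.c NOTICE lines in this dump is one bdevperf I/O: nvme_io_qpair_print_command shows the submitted READ or WRITE (sqid, cid, nsid, lba), and spdk_nvme_print_completion shows it completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path related) with status code 0x02 (ANA inaccessible), which is what the test provokes while it cycles the listeners' ANA states. A quick, illustrative way to tally those completions per queue from the saved try.txt (the grep pattern is taken from the lines above; this one-liner is not part of the test itself):

# Count ANA-inaccessible completions per qid in the saved bdevperf log.
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c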
00:26:44.471 [2024-06-10 14:35:05.505019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.471 [2024-06-10 14:35:05.505568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.471 [2024-06-10 14:35:05.505584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:26:44.472 [2024-06-10 14:35:05.505708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:05.505897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.472 [2024-06-10 14:35:05.505901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.472 [2024-06-10 14:35:19.598676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:44.472 [2024-06-10 14:35:19.598686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:44.472 [2024-06-10 14:35:19.598691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.598861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.598876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.598891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.598907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.598922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.598933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.598939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.599560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.599576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.599591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.473 [2024-06-10 14:35:19.599654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.599669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.473 [2024-06-10 14:35:19.599679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.473 [2024-06-10 14:35:19.599684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
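The repeated *NOTICE* pairs in this stretch are SPDK printing each I/O command together with its error completion: the status "(03/02)" is NVMe status code type 03h (Path Related Status) with status code 02h (Asymmetric Access Inaccessible), which is what the multipath status test provokes by making a path inaccessible while I/O is in flight. A hedged shell sketch for condensing such a capture into counts, assuming this console output has been saved to a file (multipath.log is a placeholder name, not part of the test):
    # Count commands reported with an error completion, per opcode, plus the ANA-inaccessible completions.
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ'  multipath.log
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' multipath.log
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)'         multipath.log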
00:26:44.474 [2024-06-10 14:35:19.599725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.474 [2024-06-10 14:35:19.599915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.599925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.599930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.474 [2024-06-10 14:35:19.600520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.474 [2024-06-10 14:35:19.600533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:44.474 Received shutdown signal, test time was about 29.229932 seconds 00:26:44.474 00:26:44.474 Latency(us) 00:26:44.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.474 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:44.474 Verification LBA range: start 0x0 length 0x4000 00:26:44.474 Nvme0n1 : 29.23 9491.97 37.08 0.00 0.00 13467.09 186.03 3019898.88 00:26:44.474 =================================================================================================================== 00:26:44.474 Total : 9491.97 37.08 0.00 0.00 13467.09 186.03 3019898.88 00:26:44.474 14:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.739 rmmod nvme_tcp 00:26:44.739 rmmod nvme_fabrics 00:26:44.739 rmmod nvme_keyring 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3163517 ']' 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3163517 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 3163517 ']' 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 3163517 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3163517 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3163517' 00:26:44.739 killing process with pid 3163517 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 3163517 00:26:44.739 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 3163517 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.001 14:35:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.549 14:35:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.549 00:26:47.549 real 0m42.816s 00:26:47.549 user 1m55.112s 00:26:47.549 sys 0m11.152s 00:26:47.549 14:35:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:47.549 14:35:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:47.549 ************************************ 00:26:47.549 END TEST nvmf_host_multipath_status 00:26:47.549 ************************************ 00:26:47.549 14:35:24 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:47.549 14:35:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:47.549 14:35:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:47.549 14:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.549 ************************************ 00:26:47.549 START TEST nvmf_discovery_remove_ifc 00:26:47.549 ************************************ 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:47.549 * Looking for test storage... 00:26:47.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:47.549 14:35:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.549 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 
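Before the discovery test proper starts, nvmftestinit (traced below) classifies the two e810 ports and builds the NVMe/TCP test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, and TCP port 4420 is opened. A condensed re-listing of the commands the trace below runs, gathered here purely for readability:
    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check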
00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.138 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 
2 > 1 )) 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:54.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:26:54.139 00:26:54.139 --- 10.0.0.2 ping statistics --- 00:26:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.139 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:54.139 00:26:54.139 --- 10.0.0.1 ping statistics --- 00:26:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.139 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3174085 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3174085 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 3174085 ']' 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.139 14:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:54.139 [2024-06-10 14:35:31.593328] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
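With connectivity verified in both directions, nvmfappstart launches the nvmf target inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2" line above) and waitforlisten blocks until the application answers on its UNIX RPC socket. A rough, hedged equivalent of that launch-and-wait step; using rpc_get_methods as the liveness probe is an assumption here, the actual waitforlisten helper may check differently:
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target in the target-side namespace: shm id 0, tracepoint group mask 0xFFFF, core mask 0x2.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done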
00:26:54.139 [2024-06-10 14:35:31.593391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.139 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.139 [2024-06-10 14:35:31.662350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.400 [2024-06-10 14:35:31.735103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.400 [2024-06-10 14:35:31.735138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.400 [2024-06-10 14:35:31.735145] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.400 [2024-06-10 14:35:31.735152] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.400 [2024-06-10 14:35:31.735157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.400 [2024-06-10 14:35:31.735181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.972 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:54.972 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:54.972 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.972 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:54.972 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.973 [2024-06-10 14:35:32.502509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.973 [2024-06-10 14:35:32.510627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:54.973 null0 00:26:54.973 [2024-06-10 14:35:32.542669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3174430 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3174430 /tmp/host.sock 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 3174430 ']' 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:26:54.973 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.973 14:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:55.234 [2024-06-10 14:35:32.613357] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:26:55.234 [2024-06-10 14:35:32.613403] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174430 ] 00:26:55.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.234 [2024-06-10 14:35:32.690256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.234 [2024-06-10 14:35:32.754651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.176 14:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.194 [2024-06-10 14:35:34.550817] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:57.194 [2024-06-10 14:35:34.550838] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:57.194 [2024-06-10 14:35:34.550851] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.194 [2024-06-10 14:35:34.680295] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:57.194 [2024-06-10 14:35:34.741700] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:57.194 [2024-06-10 14:35:34.741753] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:57.194 [2024-06-10 14:35:34.741776] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:57.194 [2024-06-10 14:35:34.741790] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:57.194 [2024-06-10 14:35:34.741811] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.194 [2024-06-10 14:35:34.750302] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd30790 was disconnected and freed. delete nvme_qpair. 
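The trace keeps re-running the same pipeline, rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs, followed by a string compare and sleep 1. A minimal reconstruction of those two helpers (get_bdev_list / wait_for_bdev from host/discovery_remove_ifc.sh) is sketched below; the real helper presumably also caps the number of retries, which is omitted here:

    # rpc_cmd comes from the autotest common helpers; -s points it at the host app's RPC socket
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # poll once a second until the host-side bdev list matches the expected string
    # ("" while waiting for nvme0n1 to disappear, "nvme1n1" while waiting for the re-attach)
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }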
00:26:57.194 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.455 14:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.445 14:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.445 14:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.445 14:35:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.830 14:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.773 14:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.715 14:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.658 [2024-06-10 14:35:40.182235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:02.658 [2024-06-10 14:35:40.182282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.658 [2024-06-10 14:35:40.182296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.658 [2024-06-10 14:35:40.182306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.658 [2024-06-10 14:35:40.182317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:02.658 [2024-06-10 14:35:40.182326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.658 [2024-06-10 14:35:40.182333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.658 [2024-06-10 14:35:40.182341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.658 [2024-06-10 14:35:40.182348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.658 [2024-06-10 14:35:40.182357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.658 [2024-06-10 14:35:40.182363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.658 [2024-06-10 14:35:40.182376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7220 is same with the state(5) to be set 00:27:02.658 [2024-06-10 14:35:40.192252] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7220 (9): Bad file descriptor 00:27:02.658 [2024-06-10 14:35:40.202295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:02.658 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.658 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.658 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.658 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.658 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.659 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.659 14:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.043 [2024-06-10 14:35:41.264592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:04.043 [2024-06-10 14:35:41.264681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf7220 with addr=10.0.0.2, port=4420 00:27:04.044 [2024-06-10 14:35:41.264711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7220 is same with the state(5) to be set 00:27:04.044 [2024-06-10 14:35:41.264766] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7220 (9): Bad file descriptor 00:27:04.044 [2024-06-10 14:35:41.265772] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:04.044 [2024-06-10 14:35:41.265826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.044 [2024-06-10 14:35:41.265847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.044 [2024-06-10 14:35:41.265869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:04.044 [2024-06-10 14:35:41.265928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.044 [2024-06-10 14:35:41.265953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.044 14:35:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.044 14:35:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.044 14:35:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.989 [2024-06-10 14:35:42.268360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.989 [2024-06-10 14:35:42.268393] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:04.989 [2024-06-10 14:35:42.268416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.989 [2024-06-10 14:35:42.268426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-06-10 14:35:42.268435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.989 [2024-06-10 14:35:42.268443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-06-10 14:35:42.268450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.989 [2024-06-10 14:35:42.268457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-06-10 14:35:42.268465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.989 [2024-06-10 14:35:42.268476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-06-10 14:35:42.268484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.989 [2024-06-10 14:35:42.268491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.989 [2024-06-10 14:35:42.268498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
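What triggered the failure sequence above is visible earlier in the trace: the test deletes the target address and downs the port inside the namespace (discovery_remove_ifc.sh@75/@76), and because the discovery was started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, the host gives up on the controller within a couple of seconds and drops nvme0n1. The interface is restored at @82/@83 in the trace that follows. The commands, copied from those trace lines with the failure window marked, are:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # @75: drop the target IP
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # @76: take the port down
    # ... connect() fails with errno 110, resets fail, the controller hits
    #     ctrlr-loss-timeout and nvme0n1 is removed from the bdev list ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82: restore the target IP
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83: bring the port back up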
00:27:04.989 [2024-06-10 14:35:42.269136] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf66b0 (9): Bad file descriptor 00:27:04.989 [2024-06-10 14:35:42.270147] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:04.989 [2024-06-10 14:35:42.270158] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:04.989 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.989 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.989 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.989 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:04.990 14:35:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.933 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.193 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.193 14:35:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.766 [2024-06-10 14:35:44.286493] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.766 [2024-06-10 14:35:44.286510] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.766 [2024-06-10 14:35:44.286523] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:07.026 [2024-06-10 14:35:44.374815] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:07.026 [2024-06-10 14:35:44.559933] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.026 [2024-06-10 14:35:44.559974] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.026 [2024-06-10 14:35:44.559994] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.026 [2024-06-10 14:35:44.560009] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:07.026 [2024-06-10 14:35:44.560017] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.026 [2024-06-10 14:35:44.564963] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcfdfd0 was disconnected and freed. delete nvme_qpair. 
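Because the discovery service attaches the restored subsystem as a brand-new controller, the namespace reappears as nvme1n1 rather than nvme0n1, which is why the wait loop compares against nvme1n1. The remaining steps of the test, as the following trace shows, reduce to the short sketch below (killprocess and nvmftestfini are the common.sh helpers the trace expands later):

    wait_for_bdev nvme1n1        # re-attached namespace shows up under a new controller name
    trap - SIGINT SIGTERM EXIT   # drop the error-path trap once the check passes
    killprocess "$hostpid"       # stop the host-side nvmf_tgt listening on /tmp/host.sock
    nvmftestfini                 # unload nvme-tcp/nvme-fabrics and tear down the test network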
00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3174430 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 3174430 ']' 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 3174430 00:27:07.026 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3174430 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3174430' 00:27:07.287 killing process with pid 3174430 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 3174430 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 3174430 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.287 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.288 rmmod nvme_tcp 00:27:07.288 rmmod nvme_fabrics 00:27:07.288 rmmod nvme_keyring 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3174085 ']' 00:27:07.288 
14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3174085 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 3174085 ']' 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 3174085 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:07.288 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3174085 00:27:07.549 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:07.549 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:07.549 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3174085' 00:27:07.549 killing process with pid 3174085 00:27:07.549 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 3174085 00:27:07.549 14:35:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 3174085 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.549 14:35:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.096 14:35:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.096 00:27:10.096 real 0m22.534s 00:27:10.096 user 0m27.220s 00:27:10.096 sys 0m6.289s 00:27:10.096 14:35:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:10.096 14:35:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.096 ************************************ 00:27:10.096 END TEST nvmf_discovery_remove_ifc 00:27:10.096 ************************************ 00:27:10.096 14:35:47 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.096 14:35:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:10.096 14:35:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:10.096 14:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.096 ************************************ 00:27:10.096 START TEST nvmf_identify_kernel_target 00:27:10.096 ************************************ 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh 
--transport=tcp 00:27:10.096 * Looking for test storage... 00:27:10.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.096 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.097 14:35:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.689 
14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:16.689 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:16.689 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
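The NIC probing above finds the two e810 ports (cvl_0_0 and cvl_0_1); the nvmf_tcp_init step that follows in the trace wires them into the usual two-namespace topology, roughly the sequence below (addresses and names copied from the trace, ordering slightly compressed):

    ip netns add cvl_0_0_ns_spdk                                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (inside ns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator sanity check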
00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:16.689 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:16.689 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.689 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.690 14:35:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:27:16.690 00:27:16.690 --- 10.0.0.2 ping statistics --- 00:27:16.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.690 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:27:16.690 00:27:16.690 --- 10.0.0.1 ping statistics --- 00:27:16.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.690 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.690 14:35:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:16.690 14:35:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:20.046 Waiting for block devices as requested 00:27:20.046 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:20.308 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:20.308 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:20.308 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:20.569 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:20.569 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:20.569 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:20.830 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:20.830 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:20.830 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.090 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.090 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.090 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.351 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.351 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:21.351 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:21.612 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:21.612 
14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:21.612 No valid GPT data, bailing 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:21.612 00:27:21.612 Discovery Log Number of Records 2, Generation counter 2 00:27:21.612 =====Discovery Log Entry 0====== 00:27:21.612 trtype: tcp 00:27:21.612 adrfam: ipv4 00:27:21.612 subtype: current discovery subsystem 00:27:21.612 treq: not specified, sq flow control disable supported 00:27:21.612 portid: 1 00:27:21.612 trsvcid: 4420 00:27:21.612 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:21.612 traddr: 10.0.0.1 00:27:21.612 eflags: none 00:27:21.612 sectype: none 00:27:21.612 =====Discovery Log Entry 1====== 00:27:21.612 trtype: tcp 00:27:21.612 adrfam: ipv4 00:27:21.612 subtype: nvme subsystem 00:27:21.612 treq: not 
specified, sq flow control disable supported 00:27:21.612 portid: 1 00:27:21.612 trsvcid: 4420 00:27:21.612 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:21.612 traddr: 10.0.0.1 00:27:21.612 eflags: none 00:27:21.612 sectype: none 00:27:21.612 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:21.612 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:21.612 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.874 ===================================================== 00:27:21.874 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:21.874 ===================================================== 00:27:21.874 Controller Capabilities/Features 00:27:21.874 ================================ 00:27:21.874 Vendor ID: 0000 00:27:21.874 Subsystem Vendor ID: 0000 00:27:21.874 Serial Number: 1c189ffc6de080bb24bc 00:27:21.874 Model Number: Linux 00:27:21.874 Firmware Version: 6.7.0-68 00:27:21.874 Recommended Arb Burst: 0 00:27:21.874 IEEE OUI Identifier: 00 00 00 00:27:21.874 Multi-path I/O 00:27:21.874 May have multiple subsystem ports: No 00:27:21.874 May have multiple controllers: No 00:27:21.874 Associated with SR-IOV VF: No 00:27:21.874 Max Data Transfer Size: Unlimited 00:27:21.874 Max Number of Namespaces: 0 00:27:21.874 Max Number of I/O Queues: 1024 00:27:21.874 NVMe Specification Version (VS): 1.3 00:27:21.874 NVMe Specification Version (Identify): 1.3 00:27:21.874 Maximum Queue Entries: 1024 00:27:21.874 Contiguous Queues Required: No 00:27:21.874 Arbitration Mechanisms Supported 00:27:21.874 Weighted Round Robin: Not Supported 00:27:21.874 Vendor Specific: Not Supported 00:27:21.874 Reset Timeout: 7500 ms 00:27:21.874 Doorbell Stride: 4 bytes 00:27:21.874 NVM Subsystem Reset: Not Supported 00:27:21.874 Command Sets Supported 00:27:21.874 NVM Command Set: Supported 00:27:21.874 Boot Partition: Not Supported 00:27:21.874 Memory Page Size Minimum: 4096 bytes 00:27:21.874 Memory Page Size Maximum: 4096 bytes 00:27:21.874 Persistent Memory Region: Not Supported 00:27:21.874 Optional Asynchronous Events Supported 00:27:21.874 Namespace Attribute Notices: Not Supported 00:27:21.874 Firmware Activation Notices: Not Supported 00:27:21.874 ANA Change Notices: Not Supported 00:27:21.874 PLE Aggregate Log Change Notices: Not Supported 00:27:21.874 LBA Status Info Alert Notices: Not Supported 00:27:21.874 EGE Aggregate Log Change Notices: Not Supported 00:27:21.874 Normal NVM Subsystem Shutdown event: Not Supported 00:27:21.874 Zone Descriptor Change Notices: Not Supported 00:27:21.874 Discovery Log Change Notices: Supported 00:27:21.874 Controller Attributes 00:27:21.874 128-bit Host Identifier: Not Supported 00:27:21.874 Non-Operational Permissive Mode: Not Supported 00:27:21.874 NVM Sets: Not Supported 00:27:21.874 Read Recovery Levels: Not Supported 00:27:21.874 Endurance Groups: Not Supported 00:27:21.874 Predictable Latency Mode: Not Supported 00:27:21.874 Traffic Based Keep ALive: Not Supported 00:27:21.874 Namespace Granularity: Not Supported 00:27:21.874 SQ Associations: Not Supported 00:27:21.874 UUID List: Not Supported 00:27:21.874 Multi-Domain Subsystem: Not Supported 00:27:21.874 Fixed Capacity Management: Not Supported 00:27:21.874 Variable Capacity Management: Not Supported 00:27:21.874 Delete Endurance Group: Not Supported 00:27:21.874 Delete NVM Set: Not Supported 00:27:21.874 
Extended LBA Formats Supported: Not Supported 00:27:21.874 Flexible Data Placement Supported: Not Supported 00:27:21.874 00:27:21.874 Controller Memory Buffer Support 00:27:21.874 ================================ 00:27:21.874 Supported: No 00:27:21.874 00:27:21.874 Persistent Memory Region Support 00:27:21.874 ================================ 00:27:21.874 Supported: No 00:27:21.874 00:27:21.874 Admin Command Set Attributes 00:27:21.874 ============================ 00:27:21.874 Security Send/Receive: Not Supported 00:27:21.874 Format NVM: Not Supported 00:27:21.874 Firmware Activate/Download: Not Supported 00:27:21.874 Namespace Management: Not Supported 00:27:21.874 Device Self-Test: Not Supported 00:27:21.874 Directives: Not Supported 00:27:21.874 NVMe-MI: Not Supported 00:27:21.874 Virtualization Management: Not Supported 00:27:21.874 Doorbell Buffer Config: Not Supported 00:27:21.874 Get LBA Status Capability: Not Supported 00:27:21.874 Command & Feature Lockdown Capability: Not Supported 00:27:21.874 Abort Command Limit: 1 00:27:21.874 Async Event Request Limit: 1 00:27:21.874 Number of Firmware Slots: N/A 00:27:21.874 Firmware Slot 1 Read-Only: N/A 00:27:21.874 Firmware Activation Without Reset: N/A 00:27:21.874 Multiple Update Detection Support: N/A 00:27:21.874 Firmware Update Granularity: No Information Provided 00:27:21.874 Per-Namespace SMART Log: No 00:27:21.874 Asymmetric Namespace Access Log Page: Not Supported 00:27:21.874 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:21.874 Command Effects Log Page: Not Supported 00:27:21.874 Get Log Page Extended Data: Supported 00:27:21.874 Telemetry Log Pages: Not Supported 00:27:21.874 Persistent Event Log Pages: Not Supported 00:27:21.874 Supported Log Pages Log Page: May Support 00:27:21.874 Commands Supported & Effects Log Page: Not Supported 00:27:21.875 Feature Identifiers & Effects Log Page:May Support 00:27:21.875 NVMe-MI Commands & Effects Log Page: May Support 00:27:21.875 Data Area 4 for Telemetry Log: Not Supported 00:27:21.875 Error Log Page Entries Supported: 1 00:27:21.875 Keep Alive: Not Supported 00:27:21.875 00:27:21.875 NVM Command Set Attributes 00:27:21.875 ========================== 00:27:21.875 Submission Queue Entry Size 00:27:21.875 Max: 1 00:27:21.875 Min: 1 00:27:21.875 Completion Queue Entry Size 00:27:21.875 Max: 1 00:27:21.875 Min: 1 00:27:21.875 Number of Namespaces: 0 00:27:21.875 Compare Command: Not Supported 00:27:21.875 Write Uncorrectable Command: Not Supported 00:27:21.875 Dataset Management Command: Not Supported 00:27:21.875 Write Zeroes Command: Not Supported 00:27:21.875 Set Features Save Field: Not Supported 00:27:21.875 Reservations: Not Supported 00:27:21.875 Timestamp: Not Supported 00:27:21.875 Copy: Not Supported 00:27:21.875 Volatile Write Cache: Not Present 00:27:21.875 Atomic Write Unit (Normal): 1 00:27:21.875 Atomic Write Unit (PFail): 1 00:27:21.875 Atomic Compare & Write Unit: 1 00:27:21.875 Fused Compare & Write: Not Supported 00:27:21.875 Scatter-Gather List 00:27:21.875 SGL Command Set: Supported 00:27:21.875 SGL Keyed: Not Supported 00:27:21.875 SGL Bit Bucket Descriptor: Not Supported 00:27:21.875 SGL Metadata Pointer: Not Supported 00:27:21.875 Oversized SGL: Not Supported 00:27:21.875 SGL Metadata Address: Not Supported 00:27:21.875 SGL Offset: Supported 00:27:21.875 Transport SGL Data Block: Not Supported 00:27:21.875 Replay Protected Memory Block: Not Supported 00:27:21.875 00:27:21.875 Firmware Slot Information 00:27:21.875 ========================= 00:27:21.875 
Active slot: 0 00:27:21.875 00:27:21.875 00:27:21.875 Error Log 00:27:21.875 ========= 00:27:21.875 00:27:21.875 Active Namespaces 00:27:21.875 ================= 00:27:21.875 Discovery Log Page 00:27:21.875 ================== 00:27:21.875 Generation Counter: 2 00:27:21.875 Number of Records: 2 00:27:21.875 Record Format: 0 00:27:21.875 00:27:21.875 Discovery Log Entry 0 00:27:21.875 ---------------------- 00:27:21.875 Transport Type: 3 (TCP) 00:27:21.875 Address Family: 1 (IPv4) 00:27:21.875 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:21.875 Entry Flags: 00:27:21.875 Duplicate Returned Information: 0 00:27:21.875 Explicit Persistent Connection Support for Discovery: 0 00:27:21.875 Transport Requirements: 00:27:21.875 Secure Channel: Not Specified 00:27:21.875 Port ID: 1 (0x0001) 00:27:21.875 Controller ID: 65535 (0xffff) 00:27:21.875 Admin Max SQ Size: 32 00:27:21.875 Transport Service Identifier: 4420 00:27:21.875 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:21.875 Transport Address: 10.0.0.1 00:27:21.875 Discovery Log Entry 1 00:27:21.875 ---------------------- 00:27:21.875 Transport Type: 3 (TCP) 00:27:21.875 Address Family: 1 (IPv4) 00:27:21.875 Subsystem Type: 2 (NVM Subsystem) 00:27:21.875 Entry Flags: 00:27:21.875 Duplicate Returned Information: 0 00:27:21.875 Explicit Persistent Connection Support for Discovery: 0 00:27:21.875 Transport Requirements: 00:27:21.875 Secure Channel: Not Specified 00:27:21.875 Port ID: 1 (0x0001) 00:27:21.875 Controller ID: 65535 (0xffff) 00:27:21.875 Admin Max SQ Size: 32 00:27:21.875 Transport Service Identifier: 4420 00:27:21.875 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:21.875 Transport Address: 10.0.0.1 00:27:21.875 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.875 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.875 get_feature(0x01) failed 00:27:21.875 get_feature(0x02) failed 00:27:21.875 get_feature(0x04) failed 00:27:21.875 ===================================================== 00:27:21.875 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:21.875 ===================================================== 00:27:21.875 Controller Capabilities/Features 00:27:21.875 ================================ 00:27:21.875 Vendor ID: 0000 00:27:21.875 Subsystem Vendor ID: 0000 00:27:21.875 Serial Number: 0d961085e1875abdcb50 00:27:21.875 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:21.875 Firmware Version: 6.7.0-68 00:27:21.875 Recommended Arb Burst: 6 00:27:21.875 IEEE OUI Identifier: 00 00 00 00:27:21.875 Multi-path I/O 00:27:21.875 May have multiple subsystem ports: Yes 00:27:21.875 May have multiple controllers: Yes 00:27:21.875 Associated with SR-IOV VF: No 00:27:21.875 Max Data Transfer Size: Unlimited 00:27:21.875 Max Number of Namespaces: 1024 00:27:21.875 Max Number of I/O Queues: 128 00:27:21.875 NVMe Specification Version (VS): 1.3 00:27:21.875 NVMe Specification Version (Identify): 1.3 00:27:21.875 Maximum Queue Entries: 1024 00:27:21.875 Contiguous Queues Required: No 00:27:21.875 Arbitration Mechanisms Supported 00:27:21.875 Weighted Round Robin: Not Supported 00:27:21.875 Vendor Specific: Not Supported 00:27:21.875 Reset Timeout: 7500 ms 00:27:21.875 Doorbell Stride: 4 bytes 00:27:21.875 NVM Subsystem Reset: Not Supported 
00:27:21.875 Command Sets Supported 00:27:21.875 NVM Command Set: Supported 00:27:21.875 Boot Partition: Not Supported 00:27:21.875 Memory Page Size Minimum: 4096 bytes 00:27:21.875 Memory Page Size Maximum: 4096 bytes 00:27:21.875 Persistent Memory Region: Not Supported 00:27:21.875 Optional Asynchronous Events Supported 00:27:21.875 Namespace Attribute Notices: Supported 00:27:21.875 Firmware Activation Notices: Not Supported 00:27:21.875 ANA Change Notices: Supported 00:27:21.875 PLE Aggregate Log Change Notices: Not Supported 00:27:21.875 LBA Status Info Alert Notices: Not Supported 00:27:21.875 EGE Aggregate Log Change Notices: Not Supported 00:27:21.875 Normal NVM Subsystem Shutdown event: Not Supported 00:27:21.875 Zone Descriptor Change Notices: Not Supported 00:27:21.875 Discovery Log Change Notices: Not Supported 00:27:21.875 Controller Attributes 00:27:21.875 128-bit Host Identifier: Supported 00:27:21.875 Non-Operational Permissive Mode: Not Supported 00:27:21.875 NVM Sets: Not Supported 00:27:21.875 Read Recovery Levels: Not Supported 00:27:21.875 Endurance Groups: Not Supported 00:27:21.875 Predictable Latency Mode: Not Supported 00:27:21.875 Traffic Based Keep ALive: Supported 00:27:21.875 Namespace Granularity: Not Supported 00:27:21.875 SQ Associations: Not Supported 00:27:21.875 UUID List: Not Supported 00:27:21.875 Multi-Domain Subsystem: Not Supported 00:27:21.875 Fixed Capacity Management: Not Supported 00:27:21.875 Variable Capacity Management: Not Supported 00:27:21.875 Delete Endurance Group: Not Supported 00:27:21.875 Delete NVM Set: Not Supported 00:27:21.875 Extended LBA Formats Supported: Not Supported 00:27:21.875 Flexible Data Placement Supported: Not Supported 00:27:21.875 00:27:21.875 Controller Memory Buffer Support 00:27:21.875 ================================ 00:27:21.875 Supported: No 00:27:21.875 00:27:21.875 Persistent Memory Region Support 00:27:21.875 ================================ 00:27:21.875 Supported: No 00:27:21.875 00:27:21.875 Admin Command Set Attributes 00:27:21.875 ============================ 00:27:21.875 Security Send/Receive: Not Supported 00:27:21.875 Format NVM: Not Supported 00:27:21.875 Firmware Activate/Download: Not Supported 00:27:21.875 Namespace Management: Not Supported 00:27:21.875 Device Self-Test: Not Supported 00:27:21.875 Directives: Not Supported 00:27:21.875 NVMe-MI: Not Supported 00:27:21.875 Virtualization Management: Not Supported 00:27:21.875 Doorbell Buffer Config: Not Supported 00:27:21.875 Get LBA Status Capability: Not Supported 00:27:21.875 Command & Feature Lockdown Capability: Not Supported 00:27:21.875 Abort Command Limit: 4 00:27:21.875 Async Event Request Limit: 4 00:27:21.875 Number of Firmware Slots: N/A 00:27:21.875 Firmware Slot 1 Read-Only: N/A 00:27:21.875 Firmware Activation Without Reset: N/A 00:27:21.875 Multiple Update Detection Support: N/A 00:27:21.875 Firmware Update Granularity: No Information Provided 00:27:21.875 Per-Namespace SMART Log: Yes 00:27:21.875 Asymmetric Namespace Access Log Page: Supported 00:27:21.875 ANA Transition Time : 10 sec 00:27:21.875 00:27:21.875 Asymmetric Namespace Access Capabilities 00:27:21.875 ANA Optimized State : Supported 00:27:21.875 ANA Non-Optimized State : Supported 00:27:21.875 ANA Inaccessible State : Supported 00:27:21.875 ANA Persistent Loss State : Supported 00:27:21.875 ANA Change State : Supported 00:27:21.875 ANAGRPID is not changed : No 00:27:21.875 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:21.875 00:27:21.875 ANA Group Identifier 
Maximum : 128 00:27:21.875 Number of ANA Group Identifiers : 128 00:27:21.876 Max Number of Allowed Namespaces : 1024 00:27:21.876 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:21.876 Command Effects Log Page: Supported 00:27:21.876 Get Log Page Extended Data: Supported 00:27:21.876 Telemetry Log Pages: Not Supported 00:27:21.876 Persistent Event Log Pages: Not Supported 00:27:21.876 Supported Log Pages Log Page: May Support 00:27:21.876 Commands Supported & Effects Log Page: Not Supported 00:27:21.876 Feature Identifiers & Effects Log Page:May Support 00:27:21.876 NVMe-MI Commands & Effects Log Page: May Support 00:27:21.876 Data Area 4 for Telemetry Log: Not Supported 00:27:21.876 Error Log Page Entries Supported: 128 00:27:21.876 Keep Alive: Supported 00:27:21.876 Keep Alive Granularity: 1000 ms 00:27:21.876 00:27:21.876 NVM Command Set Attributes 00:27:21.876 ========================== 00:27:21.876 Submission Queue Entry Size 00:27:21.876 Max: 64 00:27:21.876 Min: 64 00:27:21.876 Completion Queue Entry Size 00:27:21.876 Max: 16 00:27:21.876 Min: 16 00:27:21.876 Number of Namespaces: 1024 00:27:21.876 Compare Command: Not Supported 00:27:21.876 Write Uncorrectable Command: Not Supported 00:27:21.876 Dataset Management Command: Supported 00:27:21.876 Write Zeroes Command: Supported 00:27:21.876 Set Features Save Field: Not Supported 00:27:21.876 Reservations: Not Supported 00:27:21.876 Timestamp: Not Supported 00:27:21.876 Copy: Not Supported 00:27:21.876 Volatile Write Cache: Present 00:27:21.876 Atomic Write Unit (Normal): 1 00:27:21.876 Atomic Write Unit (PFail): 1 00:27:21.876 Atomic Compare & Write Unit: 1 00:27:21.876 Fused Compare & Write: Not Supported 00:27:21.876 Scatter-Gather List 00:27:21.876 SGL Command Set: Supported 00:27:21.876 SGL Keyed: Not Supported 00:27:21.876 SGL Bit Bucket Descriptor: Not Supported 00:27:21.876 SGL Metadata Pointer: Not Supported 00:27:21.876 Oversized SGL: Not Supported 00:27:21.876 SGL Metadata Address: Not Supported 00:27:21.876 SGL Offset: Supported 00:27:21.876 Transport SGL Data Block: Not Supported 00:27:21.876 Replay Protected Memory Block: Not Supported 00:27:21.876 00:27:21.876 Firmware Slot Information 00:27:21.876 ========================= 00:27:21.876 Active slot: 0 00:27:21.876 00:27:21.876 Asymmetric Namespace Access 00:27:21.876 =========================== 00:27:21.876 Change Count : 0 00:27:21.876 Number of ANA Group Descriptors : 1 00:27:21.876 ANA Group Descriptor : 0 00:27:21.876 ANA Group ID : 1 00:27:21.876 Number of NSID Values : 1 00:27:21.876 Change Count : 0 00:27:21.876 ANA State : 1 00:27:21.876 Namespace Identifier : 1 00:27:21.876 00:27:21.876 Commands Supported and Effects 00:27:21.876 ============================== 00:27:21.876 Admin Commands 00:27:21.876 -------------- 00:27:21.876 Get Log Page (02h): Supported 00:27:21.876 Identify (06h): Supported 00:27:21.876 Abort (08h): Supported 00:27:21.876 Set Features (09h): Supported 00:27:21.876 Get Features (0Ah): Supported 00:27:21.876 Asynchronous Event Request (0Ch): Supported 00:27:21.876 Keep Alive (18h): Supported 00:27:21.876 I/O Commands 00:27:21.876 ------------ 00:27:21.876 Flush (00h): Supported 00:27:21.876 Write (01h): Supported LBA-Change 00:27:21.876 Read (02h): Supported 00:27:21.876 Write Zeroes (08h): Supported LBA-Change 00:27:21.876 Dataset Management (09h): Supported 00:27:21.876 00:27:21.876 Error Log 00:27:21.876 ========= 00:27:21.876 Entry: 0 00:27:21.876 Error Count: 0x3 00:27:21.876 Submission Queue Id: 0x0 00:27:21.876 Command Id: 0x5 
00:27:21.876 Phase Bit: 0 00:27:21.876 Status Code: 0x2 00:27:21.876 Status Code Type: 0x0 00:27:21.876 Do Not Retry: 1 00:27:21.876 Error Location: 0x28 00:27:21.876 LBA: 0x0 00:27:21.876 Namespace: 0x0 00:27:21.876 Vendor Log Page: 0x0 00:27:21.876 ----------- 00:27:21.876 Entry: 1 00:27:21.876 Error Count: 0x2 00:27:21.876 Submission Queue Id: 0x0 00:27:21.876 Command Id: 0x5 00:27:21.876 Phase Bit: 0 00:27:21.876 Status Code: 0x2 00:27:21.876 Status Code Type: 0x0 00:27:21.876 Do Not Retry: 1 00:27:21.876 Error Location: 0x28 00:27:21.876 LBA: 0x0 00:27:21.876 Namespace: 0x0 00:27:21.876 Vendor Log Page: 0x0 00:27:21.876 ----------- 00:27:21.876 Entry: 2 00:27:21.876 Error Count: 0x1 00:27:21.876 Submission Queue Id: 0x0 00:27:21.876 Command Id: 0x4 00:27:21.876 Phase Bit: 0 00:27:21.876 Status Code: 0x2 00:27:21.876 Status Code Type: 0x0 00:27:21.876 Do Not Retry: 1 00:27:21.876 Error Location: 0x28 00:27:21.876 LBA: 0x0 00:27:21.876 Namespace: 0x0 00:27:21.876 Vendor Log Page: 0x0 00:27:21.876 00:27:21.876 Number of Queues 00:27:21.876 ================ 00:27:21.876 Number of I/O Submission Queues: 128 00:27:21.876 Number of I/O Completion Queues: 128 00:27:21.876 00:27:21.876 ZNS Specific Controller Data 00:27:21.876 ============================ 00:27:21.876 Zone Append Size Limit: 0 00:27:21.876 00:27:21.876 00:27:21.876 Active Namespaces 00:27:21.876 ================= 00:27:21.876 get_feature(0x05) failed 00:27:21.876 Namespace ID:1 00:27:21.876 Command Set Identifier: NVM (00h) 00:27:21.876 Deallocate: Supported 00:27:21.876 Deallocated/Unwritten Error: Not Supported 00:27:21.876 Deallocated Read Value: Unknown 00:27:21.876 Deallocate in Write Zeroes: Not Supported 00:27:21.876 Deallocated Guard Field: 0xFFFF 00:27:21.876 Flush: Supported 00:27:21.876 Reservation: Not Supported 00:27:21.876 Namespace Sharing Capabilities: Multiple Controllers 00:27:21.876 Size (in LBAs): 3750748848 (1788GiB) 00:27:21.876 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:21.876 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:21.876 UUID: 2ab8bcac-e4a7-4018-a006-86caf9b395d6 00:27:21.876 Thin Provisioning: Not Supported 00:27:21.876 Per-NS Atomic Units: Yes 00:27:21.876 Atomic Write Unit (Normal): 8 00:27:21.876 Atomic Write Unit (PFail): 8 00:27:21.876 Preferred Write Granularity: 8 00:27:21.876 Atomic Compare & Write Unit: 8 00:27:21.876 Atomic Boundary Size (Normal): 0 00:27:21.876 Atomic Boundary Size (PFail): 0 00:27:21.876 Atomic Boundary Offset: 0 00:27:21.876 NGUID/EUI64 Never Reused: No 00:27:21.876 ANA group ID: 1 00:27:21.876 Namespace Write Protected: No 00:27:21.876 Number of LBA Formats: 1 00:27:21.876 Current LBA Format: LBA Format #00 00:27:21.876 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:21.876 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.876 rmmod nvme_tcp 00:27:21.876 rmmod nvme_fabrics 00:27:21.876 14:35:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.876 14:35:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:24.469 14:36:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:27.773 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 
00:27:27.773 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:27.773 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:27.773 00:27:27.773 real 0m17.849s 00:27:27.773 user 0m4.775s 00:27:27.773 sys 0m10.033s 00:27:27.773 14:36:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:27.773 14:36:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 ************************************ 00:27:27.773 END TEST nvmf_identify_kernel_target 00:27:27.773 ************************************ 00:27:27.773 14:36:05 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:27.773 14:36:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:27.773 14:36:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:27.773 14:36:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.773 ************************************ 00:27:27.773 START TEST nvmf_auth_host 00:27:27.773 ************************************ 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:27.773 * Looking for test storage... 00:27:27.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 
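For reference, the clean_kernel_target sequence traced at the end of the preceding nvmf_identify_kernel_target test amounts to a plain configfs teardown of the kernel nvmet target. A minimal sketch, built from the paths visible in that trace (the bare 'echo 0' in the trace is assumed to disable the namespace before it is removed):

  # tear down the kernel nvmet target for nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"        # assumed: disable the namespace first
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"                  # remove namespace, then port, then subsystem
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                   # finally unload the TCP transport and nvmet core

The ordering matters: the port-to-subsystem symlink and the namespace directory have to go before the port and subsystem directories can be removed.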
00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.773 14:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@298 -- # mlx=() 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.913 14:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.913 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:35.914 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:35.914 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.914 
14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:35.914 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:35.914 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:27:35.914 00:27:35.914 --- 10.0.0.2 ping statistics --- 00:27:35.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.914 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:27:35.914 00:27:35.914 --- 10.0.0.1 ping statistics --- 00:27:35.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.914 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.914 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3188826 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3188826 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3188826 ']' 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:35.915 14:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96e01a24896efd12afff7bbc1e655697 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5kI 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96e01a24896efd12afff7bbc1e655697 0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96e01a24896efd12afff7bbc1e655697 0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96e01a24896efd12afff7bbc1e655697 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5kI 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5kI 
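The nvmftestinit and nvmfappstart traces above reduce to the following topology: the target-side E810 port (cvl_0_0) is moved into its own network namespace, each side gets an address on 10.0.0.0/24, TCP port 4420 is opened, and the SPDK target is started inside the namespace with nvme_auth debug logging. A sketch assembled from the commands shown in the trace (interface and namespace names are taken from the log; the relative nvmf_tgt path and the backgrounding are assumptions):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

Running the target in a namespace while the initiator stays in the root namespace is what lets a single machine exercise a real NIC-to-NIC NVMe/TCP path, with the two E810 ports presumably looped back to back.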
00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5kI 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=41f09d71f9fc620d645869cdc94f274c869b198a7cdd8092de62c1d21c1caad7 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DHC 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 41f09d71f9fc620d645869cdc94f274c869b198a7cdd8092de62c1d21c1caad7 3 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 41f09d71f9fc620d645869cdc94f274c869b198a7cdd8092de62c1d21c1caad7 3 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=41f09d71f9fc620d645869cdc94f274c869b198a7cdd8092de62c1d21c1caad7 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DHC 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DHC 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DHC 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b11ec069195d0dbcaec51c97bf6c0106422820059bf202b1 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZSB 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b11ec069195d0dbcaec51c97bf6c0106422820059bf202b1 0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 
b11ec069195d0dbcaec51c97bf6c0106422820059bf202b1 0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b11ec069195d0dbcaec51c97bf6c0106422820059bf202b1 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:35.915 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.176 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZSB 00:27:36.176 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZSB 00:27:36.176 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZSB 00:27:36.176 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:36.176 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=56c2b1105c4bd51c88738df687bced134af8bcbb593204ea 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AZo 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 56c2b1105c4bd51c88738df687bced134af8bcbb593204ea 2 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 56c2b1105c4bd51c88738df687bced134af8bcbb593204ea 2 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=56c2b1105c4bd51c88738df687bced134af8bcbb593204ea 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AZo 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AZo 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.AZo 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:36.177 14:36:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3e46f11698b746870ad4804c6abff22 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RSM 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3e46f11698b746870ad4804c6abff22 1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3e46f11698b746870ad4804c6abff22 1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d3e46f11698b746870ad4804c6abff22 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RSM 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RSM 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RSM 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b827e927964ff38a9f28826d046df71c 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vdc 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b827e927964ff38a9f28826d046df71c 1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b827e927964ff38a9f28826d046df71c 1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b827e927964ff38a9f28826d046df71c 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vdc 00:27:36.177 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vdc 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.vdc 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 
00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae5b8b6aa6207dddee5a6e82720dee212299e1a01bd8ba77 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HUh 00:27:36.438 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae5b8b6aa6207dddee5a6e82720dee212299e1a01bd8ba77 2 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae5b8b6aa6207dddee5a6e82720dee212299e1a01bd8ba77 2 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae5b8b6aa6207dddee5a6e82720dee212299e1a01bd8ba77 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HUh 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HUh 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HUh 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87d8233a790154fcbb65d4930d7b6043 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RzD 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87d8233a790154fcbb65d4930d7b6043 0 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87d8233a790154fcbb65d4930d7b6043 0 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=87d8233a790154fcbb65d4930d7b6043 
00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RzD 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RzD 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RzD 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b94adfd7f19ea05a3d7fd6feafe9684993ded6a9324ec9f2c1124aa3dc906d56 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ClY 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b94adfd7f19ea05a3d7fd6feafe9684993ded6a9324ec9f2c1124aa3dc906d56 3 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b94adfd7f19ea05a3d7fd6feafe9684993ded6a9324ec9f2c1124aa3dc906d56 3 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b94adfd7f19ea05a3d7fd6feafe9684993ded6a9324ec9f2c1124aa3dc906d56 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ClY 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ClY 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ClY 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3188826 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 3188826 ']' 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
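The gen_dhchap_key/format_dhchap_key steps traced above draw random bytes with xxd, pick a digest id, and pipe both through an inline Python snippet to produce the DHHC-1-prefixed secrets used for the rest of the run. A minimal sketch of that layout, assuming the secret is the ASCII hex string with a CRC-32 appended (the CRC byte order is my assumption; it is not visible in the trace):
# Sketch only: reproduce the "DHHC-1:<id>:<base64(secret + crc32)>:" layout
# seen in this log (id 0=null, 1=sha256, 2=sha384, 3=sha512). Not the exact
# helper from nvmf/common.sh.
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
digest=2                               # e.g. sha384
python3 - "$key" "$digest" <<'PY'
import sys, base64, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # byte order assumed
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PY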
00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:36.439 14:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kI 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DHC ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DHC 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZSB 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.AZo ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AZo 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RSM 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.vdc ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vdc 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HUh 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RzD ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RzD 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ClY 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.700 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
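The rpc_cmd keyring_file_add_key calls traced above hand each generated secret file to the SPDK target under a short key name (key0..key4, ckey0..ckey3) before the trace moves on to configuring the kernel-side target. A hedged sketch of the same step using scripts/rpc.py directly; the key names and file paths here are placeholders:
# Register a generated DH-CHAP secret file and its controller (bidirectional)
# counterpart with the SPDK keyring; names and paths are illustrative only.
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-sha256.example
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.example
scripts/rpc.py keyring_get_keys   # list registered keys (RPC name assumed available)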
00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:36.961 14:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:40.262 Waiting for block devices as requested 00:27:40.262 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:40.262 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:40.262 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:40.262 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:40.262 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:40.522 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:40.523 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:40.523 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:40.783 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:40.783 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:41.043 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:41.043 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:41.043 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:41.043 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:41.304 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:41.304 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:41.304 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:41.874 14:36:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:42.135 No valid GPT data, bailing 00:27:42.135 14:36:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:42.136 00:27:42.136 Discovery Log Number of Records 2, Generation counter 2 00:27:42.136 =====Discovery Log Entry 0====== 00:27:42.136 trtype: tcp 00:27:42.136 adrfam: ipv4 00:27:42.136 subtype: current discovery subsystem 00:27:42.136 treq: not specified, sq flow control disable supported 00:27:42.136 portid: 1 00:27:42.136 trsvcid: 4420 00:27:42.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:42.136 traddr: 10.0.0.1 00:27:42.136 eflags: none 00:27:42.136 sectype: none 00:27:42.136 =====Discovery Log Entry 1====== 00:27:42.136 trtype: tcp 00:27:42.136 adrfam: ipv4 00:27:42.136 subtype: nvme subsystem 00:27:42.136 treq: not specified, sq flow control disable supported 00:27:42.136 portid: 1 00:27:42.136 trsvcid: 4420 00:27:42.136 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:42.136 traddr: 10.0.0.1 00:27:42.136 eflags: none 00:27:42.136 sectype: none 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 
]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.136 nvme0n1 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.136 14:36:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.136 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.398 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.398 
14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.399 nvme0n1 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.399 14:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.660 14:36:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.660 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.661 nvme0n1 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
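Each connect_authenticate pass, like the one just traced, reduces to two initiator-side RPCs: restrict the allowed DH-CHAP digests and DH groups, then attach to the kernel target with the per-key secrets, verify the controller appears, and detach before the next combination. A sketch mirroring the rpc_cmd calls in this trace (same addresses and NQNs as shown above):
# One connect_authenticate iteration (sha256 / ffdhe2048, keyid 1).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers          # expect a controller named nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next combination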
00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.661 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.922 nvme0n1 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:42.922 14:36:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.922 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.184 nvme0n1 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.184 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.446 nvme0n1 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.446 14:36:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.707 nvme0n1 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.707 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.708 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.968 nvme0n1 00:27:43.968 
14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:43.968 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.969 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 nvme0n1 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
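The nvmet_auth_set_key traces around this point (echo 'hmac(sha256)', echo ffdhe3072, echo DHHC-1:...) are the target-side half: before each host connect, the test pushes the digest, DH group, host key and, when present, controller key for the given key index into the kernel nvmet target. A minimal sketch of such a helper, with the caveat that the configfs attribute names and path below are an assumption about the kernel nvmet interface and are not visible in this trace:

    # Assumed configfs layout; only the echoed values come from the log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"   > "$host_cfs/dhchap_hash"
        echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"
        echo "${keys[$keyid]}" > "$host_cfs/dhchap_key"
        [[ -n ${ckeys[$keyid]} ]] && echo "${ckeys[$keyid]}" > "$host_cfs/dhchap_ctrl_key"
    }
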
00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.229 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.490 nvme0n1 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.490 
14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.490 14:36:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.490 14:36:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.759 nvme0n1 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:44.760 14:36:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.760 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.019 nvme0n1 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.019 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.020 14:36:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.020 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.280 nvme0n1 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.280 14:36:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.280 14:36:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.541 nvme0n1 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.541 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
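Each secret in this log is carried in the NVMe in-band authentication text form DHHC-1:xx:<base64>:, where the two-digit field records whether and with which SHA the secret was transformed (00 means not transformed; 01/02/03 correspond to SHA-256/384/512) and the base64 payload is the secret followed by a CRC-32 of it. A quick illustration with one of the keys used above (illustrative check only, not part of the test):

    # 48 base64 chars decode to 36 bytes: a 32-byte secret plus a 4-byte CRC-32.
    key='DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17:'
    echo "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36
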
00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.802 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.063 nvme0n1 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.063 14:36:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.063 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.064 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.324 nvme0n1 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:46.324 14:36:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.324 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.325 14:36:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 nvme0n1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.895 
14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.895 14:36:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.895 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.466 nvme0n1 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.466 14:36:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 nvme0n1 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.079 
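Everything in this section is the same target-configure / host-connect / verify / detach cycle, driven by nested loops over the configured DH groups (ffdhe3072, ffdhe4096 and ffdhe6144 so far, with ffdhe8192 following, all under the sha256 digest) and over every key index. Condensed, the driver visible in the for dhgroup / for keyid traces amounts to the following, with the helper bodies living in host/auth.sh, the script named in every trace line:

    # Condensed driver loop; dhgroups, keys and the two helpers are defined in auth.sh.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # program the kernel target
            connect_authenticate sha256 "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done
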
14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.079 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.080 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.340 nvme0n1 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.340 14:36:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.600 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.601 14:36:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.861 nvme0n1 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.861 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.122 14:36:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.693 nvme0n1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.693 14:36:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.693 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.694 14:36:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.635 nvme0n1 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.635 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 nvme0n1 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.576 
14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:51.576 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
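Each connect_authenticate pass in this trace follows the same host-side recipe: restrict the initiator to the one digest/DH-group pair under test with bdev_nvme_set_options, resolve the initiator IP, then attach the controller with the DH-HMAC-CHAP key for the current keyid. A condensed sketch of the sha256/ffdhe8192/key3 round shown here, assuming scripts/rpc.py behaves like the test's rpc_cmd wrapper and that key3/ckey3 were registered with SPDK earlier in the run (that registration is not part of this excerpt):

# Allow only the digest and DH group being exercised in this round.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# Attach with the host key for keyid 3; the controller key is passed only when a
# bidirectional key exists for that keyid (the ${ckeys[keyid]:+...} expansion in the trace).
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3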
00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.577 14:36:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.147 nvme0n1 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.147 
14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.147 14:36:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.089 nvme0n1 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.089 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 nvme0n1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
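The host/auth.sh@100 through @104 markers are the loops driving all of this output: every digest is crossed with every DH group and every keyid (0-4), and by this point the run has moved from the sha256/ffdhe8192 rounds into sha384/ffdhe2048. A rough sketch of that driver, using only the array and helper names visible in the trace (helper bodies elided):

for digest in "${digests[@]}"; do            # sha256, sha384, ... as exercised in this run
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 through ffdhe8192
        for keyid in "${!keys[@]}"; do       # keyids 0-4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
done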
00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.090 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 nvme0n1 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:53.350 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.351 14:36:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.611 nvme0n1 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.611 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.871 nvme0n1 00:27:53.871 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.871 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.872 nvme0n1 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.872 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
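Between keyids the trace always repeats the same check-and-teardown pair (host/auth.sh@64/@65) seen just above: list the controllers, confirm the authenticated attach actually produced nvme0, then detach so the next digest/dhgroup/keyid combination starts from a clean state. A minimal sketch of that step, again treating scripts/rpc.py as the plain equivalent of rpc_cmd:

# The attach must have created exactly the controller requested; anything else fails the round.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]
# Tear the controller down before the next digest/dhgroup/keyid combination is tried.
scripts/rpc.py bdev_nvme_detach_controller nvme0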
00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.133 nvme0n1 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.133 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:54.394 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
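On the target side, nvmet_auth_set_key (host/auth.sh@42 through @51) stages the same round's credentials before the host connects: the digest wrapped as an hmac() string, the DH group name, the DHHC-1 host key, and, when one exists, the controller key. The xtrace shows the echo commands but not their redirect targets, so the configfs paths below are an assumption about a kernel nvmet target; $key and $ckey stand for the DHHC-1 strings printed in the trace for the sha384/ffdhe3072 round in progress here:

# Assumed per-host configfs attributes on a Linux nvmet target (paths illustrative, not from the trace).
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # host/auth.sh@48
echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # host/auth.sh@49
echo "$key"         > "$host_dir/dhchap_key"       # host/auth.sh@50, DHHC-1 host key
[[ -z "$ckey" ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # host/auth.sh@51, bidirectional only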
00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.395 nvme0n1 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.395 14:36:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 nvme0n1 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.656 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 nvme0n1 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.917 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.179 nvme0n1 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.179 14:36:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.179 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.441 14:36:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.701 nvme0n1 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.701 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.702 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.961 nvme0n1 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.961 14:36:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.961 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.962 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.220 nvme0n1 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.220 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:56.479 14:36:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.479 14:36:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 nvme0n1 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:56.740 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 nvme0n1 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.001 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.577 nvme0n1 00:27:57.577 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.577 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.577 14:36:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.577 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.577 14:36:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.577 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.578 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.148 nvme0n1 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.148 14:36:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.148 14:36:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.721 nvme0n1 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:27:58.721 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.722 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.982 nvme0n1 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.982 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
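The nvmf/common.sh@741-755 lines repeated before every attach (including the run just above) are the get_main_ns_ip helper picking the address the host should connect to for the transport under test. A minimal sketch of what that trace corresponds to, reconstructed from the xtrace alone; the TEST_TRANSPORT variable name and the indirect ${!ip} expansion are assumptions, since only the already-expanded values appear in the log:

# get_main_ns_ip, as suggested by the nvmf/common.sh@741-755 trace lines.
# NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP are exported elsewhere by the test
# environment; in this tcp run the initiator address expands to 10.0.0.1.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma jobs connect to the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp jobs (this one) connect to the initiator IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # name of the variable holding the address
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"                                # 10.0.0.1 throughout this log
}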
00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.242 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.243 14:36:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.503 nvme0n1 00:27:59.503 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.503 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.503 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.503 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.503 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
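Each pass through this stretch of the log is one cell of the host/auth.sh@100-104 matrix: nvmet_auth_set_key (@42-51) programs the digest, DH group and key for the current keyid on the kernel nvmet host entry, and connect_authenticate (@55-65) then proves the host can authenticate with matching key material before detaching and moving on. A condensed sketch of that flow, reconstructed from the trace; it assumes the surrounding auth.sh environment (the keys/ckeys arrays, rpc_cmd, get_main_ns_ip), and anything not literally visible in the xtrace, such as variable names or nvmet_auth_set_key's write targets, is an assumption:

# connect_authenticate (host/auth.sh@55-65) for one digest/dhgroup/keyid combination.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller (bidirectional) key is only passed when ckey$keyid was generated.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the initiator to the digest and DH group being exercised.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP; this only succeeds if target and host keys agree.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded if the controller registered as nvme0; detach for the next run.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

# Driver matrix (host/auth.sh@100-104): every digest x dhgroup x keyid combination.
for digest in "${digests[@]}"; do            # sha384 and sha512 in this part of the log
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do       # 0..4
            # Target side first: echo 'hmac($digest)', $dhgroup, key$keyid (and
            # ckey$keyid when present); the echo destinations are not shown in the xtrace.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done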
00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.764 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.335 nvme0n1 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.335 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.595 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.596 14:36:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.596 14:36:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.596 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.596 14:36:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.166 nvme0n1 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.166 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.426 14:36:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.996 nvme0n1 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.996 14:36:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.938 nvme0n1 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.938 14:36:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.938 14:36:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 nvme0n1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 nvme0n1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.882 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.143 nvme0n1 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.144 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.405 nvme0n1 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.405 14:36:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.405 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.406 14:36:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.406 nvme0n1 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.406 14:36:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 nvme0n1 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.668 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.669 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:04.669 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.669 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 nvme0n1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.930 
14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.930 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.224 14:36:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.224 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.224 nvme0n1 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
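The nvmet_auth_set_key trace around this point (the auth.sh @42-@51 markers) echoes the digest 'hmac(sha512)', the DH group and the key pair for the keyid under test; the redirection targets are not captured in the xtrace. On a Linux kernel nvmet target those values are normally programmed through the host entry in configfs, so a plausible expansion of the helper is sketched below. The configfs paths and attribute names are assumptions based on the stock nvmet layout, not something this log shows.

    # Assumed target-side expansion of nvmet_auth_set_key; paths/attributes are not in this log.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # DH-HMAC-CHAP digest for this pass
    echo 'ffdhe3072'    > "$host_dir/dhchap_dhgroup"   # FFDHE group for this pass
    echo "$key"         > "$host_dir/dhchap_key"       # host secret for the current keyid
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret, bidirectional auth only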
00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.225 14:36:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.486 nvme0n1 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.486 14:36:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.486 14:36:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.486 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
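The get_main_ns_ip fragment repeated before every attach (the nvmf/common.sh @741-@755 markers) resolves the address the initiator dials: the transport is mapped to an environment variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and that variable's value, 10.0.0.1 in this run, is echoed. A condensed reconstruction of the helper follows; the transport variable name ($TEST_TRANSPORT) is an assumption, since the xtrace only shows its expanded value.

    # Reconstruction of nvmf/common.sh:get_main_ns_ip from the xtrace; treat as an approximation.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # name of the variable holding the address
        [[ -z ${!ip} ]] && return 1                   # NVMF_INITIATOR_IP=10.0.0.1 in this run
        echo "${!ip}"
    }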
00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.487 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.747 nvme0n1 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.747 
14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.747 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.008 nvme0n1 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.008 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.268 nvme0n1 00:28:06.268 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.268 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.269 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.528 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.529 14:36:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.529 14:36:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.790 nvme0n1 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
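Each connect_authenticate pass drives the SPDK host through the same four RPCs that appear in the trace: restrict the allowed digest and DH group, attach the controller with the key pair for the current keyid, confirm the controller shows up as nvme0, then detach. Replayed outside the harness, the keyid=2 / ffdhe4096 iteration that starts here would look roughly like the sketch below; rpc_cmd is assumed to be the usual wrapper around scripts/rpc.py, and key2/ckey2 are key names registered earlier in the test, outside this excerpt.

    # Assumed standalone equivalent of the rpc_cmd calls traced below.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0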
00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.790 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.051 nvme0n1 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.051 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.312 nvme0n1 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.312 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.573 14:36:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.574 14:36:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.574 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.574 14:36:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.834 nvme0n1 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
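The host/auth.sh @101-@104 markers give the shape of the driver loop for this sha512 pass: every DH group is paired with every keyid, the key is first installed on the target and the same parameters are then exercised from the host. A skeleton reconstructed from those markers is shown below; the array names dhgroups and keys come from the trace itself, everything else is paraphrase rather than a verbatim copy of auth.sh.

    # Loop skeleton inferred from the @101-@104 trace markers.
    for dhgroup in "${dhgroups[@]}"; do              # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do               # keyids 0-4 in this run
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # program the target with key/ckey
            connect_authenticate sha512 "$dhgroup" "$keyid"   # attach, verify nvme0, detach on the host
        done
    done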
00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.834 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.405 nvme0n1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
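Every secret in this run is carried in the DHHC-1 transport representation visible throughout the trace, 'DHHC-1:<id>:<base64 payload>:'. As background on the format (an interpretation, not something the log itself states): the id field, 00 through 03 here, indicates whether and with which SHA-2 variant the secret is transformed before use, and the payload appears to be the secret with a short check value appended. One of the keys from the trace can be inspected directly:

    # Decode the payload field of a DHHC-1 secret taken from this log and report its size.
    key='DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==:'
    payload=$(cut -d: -f3 <<< "$key")
    base64 -d <<< "$payload" | wc -c    # prints the payload length (52 here), consistent with a 48-byte secret plus 4 trailer bytes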
00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.405 14:36:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.665 nvme0n1 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.665 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.926 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.187 nvme0n1 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:09.187 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.188 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.449 14:36:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.710 nvme0n1 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.710 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.970 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.230 nvme0n1 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.230 14:36:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTZlMDFhMjQ4OTZlZmQxMmFmZmY3YmJjMWU2NTU2OTdAQXFZ: 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: ]] 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFmMDlkNzFmOWZjNjIwZDY0NTg2OWNkYzk0ZjI3NGM4NjliMTk4YTdjZGQ4MDkyZGU2MmMxZDIxYzFjYWFkN2PsA6E=: 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.230 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.491 14:36:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.061 nvme0n1 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.061 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.062 14:36:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 nvme0n1 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.003 14:36:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDNlNDZmMTE2OThiNzQ2ODcwYWQ0ODA0YzZhYmZmMjL1nq17: 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjgyN2U5Mjc5NjRmZjM4YTlmMjg4MjZkMDQ2ZGY3MWPEEnXR: 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.003 14:36:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.944 nvme0n1 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.944 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWU1YjhiNmFhNjIwN2RkZGVlNWE2ZTgyNzIwZGVlMjEyMjk5ZTFhMDFiZDhiYTc3rNENhQ==: 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODdkODIzM2E3OTAxNTRmY2JiNjVkNDkzMGQ3YjYwNDPesXq6: 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:12.945 14:36:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.945 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.516 nvme0n1 00:28:13.516 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.516 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.516 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.516 14:36:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.516 14:36:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0YWRmZDdmMTllYTA1YTNkN2ZkNmZlYWZlOTY4NDk5M2RlZDZhOTMyNGVjOWYyYzExMjRhYTNkYzkwNmQ1Nqw6wfw=: 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:28:13.516 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.457 nvme0n1 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjExZWMwNjkxOTVkMGRiY2FlYzUxYzk3YmY2YzAxMDY0MjI4MjAwNTliZjIwMmIxBwQZ1g==: 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTZjMmIxMTA1YzRiZDUxYzg4NzM4ZGY2ODdiY2VkMTM0YWY4YmNiYjU5MzIwNGVhkr3gvA==: 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.457 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.457 
14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.458 request: 00:28:14.458 { 00:28:14.458 "name": "nvme0", 00:28:14.458 "trtype": "tcp", 00:28:14.458 "traddr": "10.0.0.1", 00:28:14.458 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:14.458 "adrfam": "ipv4", 00:28:14.458 "trsvcid": "4420", 00:28:14.458 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:14.458 "method": "bdev_nvme_attach_controller", 00:28:14.458 "req_id": 1 00:28:14.458 } 00:28:14.458 Got JSON-RPC error response 00:28:14.458 response: 00:28:14.458 { 00:28:14.458 "code": -5, 00:28:14.458 "message": "Input/output error" 00:28:14.458 } 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:14.458 
14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.458 14:36:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.458 request: 00:28:14.458 { 00:28:14.458 "name": "nvme0", 00:28:14.458 "trtype": "tcp", 00:28:14.458 "traddr": "10.0.0.1", 00:28:14.458 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:14.458 "adrfam": "ipv4", 00:28:14.458 "trsvcid": "4420", 00:28:14.458 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:14.458 "dhchap_key": "key2", 00:28:14.458 "method": "bdev_nvme_attach_controller", 00:28:14.458 "req_id": 1 00:28:14.458 } 00:28:14.458 Got JSON-RPC error response 00:28:14.458 response: 00:28:14.458 { 00:28:14.458 "code": -5, 00:28:14.458 "message": "Input/output error" 00:28:14.458 } 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:14.458 
14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.458 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.717 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.718 request: 00:28:14.718 { 00:28:14.718 "name": "nvme0", 00:28:14.718 "trtype": "tcp", 00:28:14.718 "traddr": "10.0.0.1", 00:28:14.718 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:14.718 "adrfam": "ipv4", 00:28:14.718 "trsvcid": "4420", 00:28:14.718 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:14.718 "dhchap_key": "key1", 00:28:14.718 "dhchap_ctrlr_key": "ckey2", 00:28:14.718 "method": "bdev_nvme_attach_controller", 00:28:14.718 "req_id": 1 
00:28:14.718 } 00:28:14.718 Got JSON-RPC error response 00:28:14.718 response: 00:28:14.718 { 00:28:14.718 "code": -5, 00:28:14.718 "message": "Input/output error" 00:28:14.718 } 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:14.718 rmmod nvme_tcp 00:28:14.718 rmmod nvme_fabrics 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3188826 ']' 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3188826 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 3188826 ']' 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 3188826 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3188826 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3188826' 00:28:14.718 killing process with pid 3188826 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 3188826 00:28:14.718 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 3188826 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.978 14:36:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.978 14:36:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:16.887 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:17.148 14:36:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:20.446 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:20.446 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:20.446 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5kI /tmp/spdk.key-null.ZSB /tmp/spdk.key-sha256.RSM /tmp/spdk.key-sha384.HUh /tmp/spdk.key-sha512.ClY /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:20.446 14:36:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:23.751 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:28:23.751 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:23.751 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:23.751 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:23.752 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:23.752 00:28:23.752 real 0m55.770s 00:28:23.752 user 0m50.261s 00:28:23.752 sys 0m14.250s 00:28:23.752 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:23.752 14:37:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.752 ************************************ 00:28:23.752 END TEST nvmf_auth_host 00:28:23.752 ************************************ 00:28:23.752 14:37:00 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:23.752 14:37:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:23.752 14:37:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:23.752 14:37:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:23.752 14:37:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.752 ************************************ 00:28:23.752 START TEST nvmf_digest 00:28:23.752 ************************************ 00:28:23.752 14:37:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:23.752 * Looking for test storage... 
00:28:23.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.752 14:37:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.752 14:37:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:30.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:30.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:30.412 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:30.412 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:30.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:28:30.412 00:28:30.412 --- 10.0.0.2 ping statistics --- 00:28:30.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.412 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:28:30.412 00:28:30.412 --- 10.0.0.1 ping statistics --- 00:28:30.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.412 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:30.412 14:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:30.413 14:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:30.413 14:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:30.413 14:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:30.413 14:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.673 ************************************ 00:28:30.673 START TEST nvmf_digest_clean 00:28:30.673 ************************************ 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3205062 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3205062 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3205062 ']' 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.673 
14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:30.673 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.673 [2024-06-10 14:37:08.094296] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:30.673 [2024-06-10 14:37:08.094362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.673 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.673 [2024-06-10 14:37:08.180378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.934 [2024-06-10 14:37:08.273765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.934 [2024-06-10 14:37:08.273821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.934 [2024-06-10 14:37:08.273829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.934 [2024-06-10 14:37:08.273836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.934 [2024-06-10 14:37:08.273843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:30.934 [2024-06-10 14:37:08.273870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.507 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:31.507 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:31.507 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:31.507 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:31.507 14:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.507 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.507 null0 00:28:31.767 [2024-06-10 14:37:09.103399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.767 [2024-06-10 14:37:09.127644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3205175 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3205175 /var/tmp/bperf.sock 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3205175 ']' 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:31.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:31.767 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.768 [2024-06-10 14:37:09.185732] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:31.768 [2024-06-10 14:37:09.185793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205175 ] 00:28:31.768 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.768 [2024-06-10 14:37:09.248720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.768 [2024-06-10 14:37:09.322951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.768 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:31.768 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:31.768 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.768 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.768 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.028 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.028 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.288 nvme0n1 00:28:32.547 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.547 14:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.547 Running I/O for 2 seconds... 
00:28:34.458 00:28:34.458 Latency(us) 00:28:34.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:34.458 nvme0n1 : 2.00 19598.57 76.56 0.00 0.00 6523.85 3208.53 21299.20 00:28:34.458 =================================================================================================================== 00:28:34.458 Total : 19598.57 76.56 0.00 0.00 6523.85 3208.53 21299.20 00:28:34.458 0 00:28:34.458 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:34.458 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:34.458 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.458 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.458 | select(.opcode=="crc32c") 00:28:34.458 | "\(.module_name) \(.executed)"' 00:28:34.458 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3205175 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3205175 ']' 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3205175 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3205175 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3205175' 00:28:34.722 killing process with pid 3205175 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3205175 00:28:34.722 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.722 00:28:34.722 Latency(us) 00:28:34.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.722 =================================================================================================================== 00:28:34.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.722 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3205175 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:34.984 14:37:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3205773 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3205773 /var/tmp/bperf.sock 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3205773 ']' 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:34.984 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.984 [2024-06-10 14:37:12.478101] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:34.984 [2024-06-10 14:37:12.478158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205773 ] 00:28:34.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:34.984 Zero copy mechanism will not be used. 
00:28:34.984 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.984 [2024-06-10 14:37:12.539644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.244 [2024-06-10 14:37:12.605194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.244 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:35.244 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:35.244 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.244 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.244 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:35.503 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.503 14:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.763 nvme0n1 00:28:35.763 14:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:35.763 14:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.763 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.763 Zero copy mechanism will not be used. 00:28:35.763 Running I/O for 2 seconds... 
00:28:37.673 00:28:37.673 Latency(us) 00:28:37.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.673 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:37.673 nvme0n1 : 2.00 4762.94 595.37 0.00 0.00 3355.37 737.28 8628.91 00:28:37.673 =================================================================================================================== 00:28:37.673 Total : 4762.94 595.37 0.00 0.00 3355.37 737.28 8628.91 00:28:37.673 0 00:28:37.673 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:37.673 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:37.673 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:37.673 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:37.673 | select(.opcode=="crc32c") 00:28:37.673 | "\(.module_name) \(.executed)"' 00:28:37.673 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3205773 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3205773 ']' 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3205773 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3205773 00:28:37.933 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3205773' 00:28:38.194 killing process with pid 3205773 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3205773 00:28:38.194 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.194 00:28:38.194 Latency(us) 00:28:38.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.194 =================================================================================================================== 00:28:38.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3205773 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:38.194 14:37:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3206451 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3206451 /var/tmp/bperf.sock 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3206451 ']' 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:38.194 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.194 [2024-06-10 14:37:15.706604] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:28:38.194 [2024-06-10 14:37:15.706660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206451 ] 00:28:38.194 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.194 [2024-06-10 14:37:15.763875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.454 [2024-06-10 14:37:15.827123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.454 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:38.454 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:38.454 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:38.454 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:38.454 14:37:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:38.714 14:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.715 14:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.975 nvme0n1 00:28:38.975 14:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:38.975 14:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:39.234 Running I/O for 2 seconds... 
00:28:41.143 00:28:41.143 Latency(us) 00:28:41.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.143 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.143 nvme0n1 : 2.01 21611.17 84.42 0.00 0.00 5910.33 3003.73 15073.28 00:28:41.143 =================================================================================================================== 00:28:41.143 Total : 21611.17 84.42 0.00 0.00 5910.33 3003.73 15073.28 00:28:41.143 0 00:28:41.143 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:41.143 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:41.143 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:41.143 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:41.143 | select(.opcode=="crc32c") 00:28:41.143 | "\(.module_name) \(.executed)"' 00:28:41.143 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3206451 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3206451 ']' 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3206451 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3206451 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3206451' 00:28:41.404 killing process with pid 3206451 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3206451 00:28:41.404 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.404 00:28:41.404 Latency(us) 00:28:41.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.404 =================================================================================================================== 00:28:41.404 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.404 14:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3206451 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:41.664 14:37:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3207127 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3207127 /var/tmp/bperf.sock 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 3207127 ']' 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:41.664 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.664 [2024-06-10 14:37:19.138132] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:41.665 [2024-06-10 14:37:19.138207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207127 ] 00:28:41.665 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.665 Zero copy mechanism will not be used. 
00:28:41.665 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.665 [2024-06-10 14:37:19.197026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.925 [2024-06-10 14:37:19.260391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.925 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:41.925 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:41.925 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:41.925 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:41.925 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.185 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.185 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.445 nvme0n1 00:28:42.445 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.445 14:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.445 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.445 Zero copy mechanism will not be used. 00:28:42.445 Running I/O for 2 seconds... 
00:28:44.356 00:28:44.356 Latency(us) 00:28:44.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.356 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:44.357 nvme0n1 : 2.00 4648.02 581.00 0.00 0.00 3436.61 1672.53 10103.47 00:28:44.357 =================================================================================================================== 00:28:44.357 Total : 4648.02 581.00 0.00 0.00 3436.61 1672.53 10103.47 00:28:44.357 0 00:28:44.357 14:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:44.357 14:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:44.623 14:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:44.623 14:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:44.623 | select(.opcode=="crc32c") 00:28:44.623 | "\(.module_name) \(.executed)"' 00:28:44.623 14:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3207127 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3207127 ']' 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3207127 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3207127 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3207127' 00:28:44.623 killing process with pid 3207127 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3207127 00:28:44.623 Received shutdown signal, test time was about 2.000000 seconds 00:28:44.623 00:28:44.623 Latency(us) 00:28:44.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.623 =================================================================================================================== 00:28:44.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.623 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3207127 00:28:44.885 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3205062 00:28:44.886 14:37:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 3205062 ']' 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 3205062 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3205062 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3205062' 00:28:44.886 killing process with pid 3205062 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 3205062 00:28:44.886 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 3205062 00:28:45.146 00:28:45.146 real 0m14.511s 00:28:45.146 user 0m28.734s 00:28:45.146 sys 0m3.384s 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.146 ************************************ 00:28:45.146 END TEST nvmf_digest_clean 00:28:45.146 ************************************ 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:45.146 ************************************ 00:28:45.146 START TEST nvmf_digest_error 00:28:45.146 ************************************ 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3207833 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3207833 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3207833 ']' 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:45.146 14:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.146 [2024-06-10 14:37:22.667945] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:45.146 [2024-06-10 14:37:22.667992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.146 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.406 [2024-06-10 14:37:22.747970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.406 [2024-06-10 14:37:22.812271] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.406 [2024-06-10 14:37:22.812307] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.406 [2024-06-10 14:37:22.812319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.406 [2024-06-10 14:37:22.812325] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.406 [2024-06-10 14:37:22.812331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
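For the nvmf_digest_error run starting here, the target is launched with --wait-for-rpc so that crc32c handling can be rerouted before initialization completes; the RPC sequence visible further down condenses to roughly the sketch below (rpc.py again abbreviates the spdk/scripts path; target-side calls go to the default /var/tmp/spdk.sock the script waits on, bperf-side calls to /var/tmp/bperf.sock, and all flags are copied from the commands logged in this run):
    # target side: route crc32c operations to the error-injection accel module
    rpc.py accel_assign_opc -o crc32c -m error
    # bperf side: enable NVMe error stats and bdev-level retries
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: keep injection disabled while the controller attaches
    rpc.py accel_error_inject_error -o crc32c -t disable
    # bperf side: attach with data digest enabled over TCP
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: switch to 'corrupt' injection for crc32c, then run the workload
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests
This is why the perform_tests run that follows logs a stream of "data digest error" messages and COMMAND TRANSIENT TRANSPORT ERROR completions: the target deliberately corrupts the crc32c results used for the TCP data digest.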
00:28:45.406 [2024-06-10 14:37:22.812352] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.977 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.237 [2024-06-10 14:37:23.574517] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.237 null0 00:28:46.237 [2024-06-10 14:37:23.655112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.237 [2024-06-10 14:37:23.679290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3208180 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3208180 /var/tmp/bperf.sock 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3208180 ']' 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:46.237 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.237 [2024-06-10 14:37:23.732692] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:46.237 [2024-06-10 14:37:23.732737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208180 ] 00:28:46.237 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.237 [2024-06-10 14:37:23.789472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.498 [2024-06-10 14:37:23.853420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.498 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:46.498 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:46.498 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:46.498 14:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.759 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.020 nvme0n1 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:47.020 14:37:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.020 Running I/O for 2 seconds... 00:28:47.020 [2024-06-10 14:37:24.558900] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.020 [2024-06-10 14:37:24.558939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.020 [2024-06-10 14:37:24.558951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.020 [2024-06-10 14:37:24.571307] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.020 [2024-06-10 14:37:24.571334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.020 [2024-06-10 14:37:24.571344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.020 [2024-06-10 14:37:24.586890] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.020 [2024-06-10 14:37:24.586912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.020 [2024-06-10 14:37:24.586921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.020 [2024-06-10 14:37:24.597405] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.020 [2024-06-10 14:37:24.597426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.020 [2024-06-10 14:37:24.597435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.020 [2024-06-10 14:37:24.611703] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.020 [2024-06-10 14:37:24.611724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.020 [2024-06-10 14:37:24.611733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.623652] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.623674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.623683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.636412] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.636433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11686 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.636443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.648122] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.648143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.648153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.660167] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.660188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.660197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.676570] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.676592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.676600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.692094] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.692116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.692124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.703399] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.703420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.703429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.718780] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.718801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.718810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.734008] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.734030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:119 nsid:1 lba:13986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.734038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.745437] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.745457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.745466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.758779] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.758800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.758808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.771629] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.771650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.771659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.783277] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.783297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.783310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.796611] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.796633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.796643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.808707] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.808728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.808737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.821377] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.821400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.282 [2024-06-10 14:37:24.821409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.282 [2024-06-10 14:37:24.834212] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.282 [2024-06-10 14:37:24.834233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.283 [2024-06-10 14:37:24.834242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.283 [2024-06-10 14:37:24.845718] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.283 [2024-06-10 14:37:24.845739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.283 [2024-06-10 14:37:24.845747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.283 [2024-06-10 14:37:24.858971] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.283 [2024-06-10 14:37:24.858991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.283 [2024-06-10 14:37:24.859000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.283 [2024-06-10 14:37:24.871078] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.283 [2024-06-10 14:37:24.871099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.283 [2024-06-10 14:37:24.871107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.881523] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.881544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.881553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.896497] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.896523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.896531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.909777] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 
00:28:47.625 [2024-06-10 14:37:24.909798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.909806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.921728] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.921749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.921758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.933692] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.933713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.933722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.947321] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.947342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.947351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.960190] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.960210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.960219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.974545] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.974566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.974574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:24.985577] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:24.985598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:24.985607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.000160] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.000181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.014084] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.014105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.014114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.026271] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.026292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.026300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.038604] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.038625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.038634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.050871] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.050892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.050900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.062572] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.062593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.062602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.077289] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.077309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.077323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.088120] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.088141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.088150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.101146] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.101167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.101175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.114062] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.114087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.114097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.125015] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.125037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.125045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.137044] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.137065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.137073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.149092] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.149113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.149121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.625 [2024-06-10 14:37:25.164806] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.164827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.164836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:47.625 [2024-06-10 14:37:25.177242] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.625 [2024-06-10 14:37:25.177262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.625 [2024-06-10 14:37:25.177270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.626 [2024-06-10 14:37:25.188998] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.626 [2024-06-10 14:37:25.189019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.626 [2024-06-10 14:37:25.189027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.626 [2024-06-10 14:37:25.202057] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.626 [2024-06-10 14:37:25.202077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.626 [2024-06-10 14:37:25.202086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.626 [2024-06-10 14:37:25.213070] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.626 [2024-06-10 14:37:25.213091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.626 [2024-06-10 14:37:25.213100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.891 [2024-06-10 14:37:25.229776] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.891 [2024-06-10 14:37:25.229797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.891 [2024-06-10 14:37:25.229806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.891 [2024-06-10 14:37:25.241158] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.891 [2024-06-10 14:37:25.241178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.891 [2024-06-10 14:37:25.241187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.891 [2024-06-10 14:37:25.256673] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.256695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.256703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.270339] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.270360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.270368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.281955] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.281977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.281986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.294666] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.294687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.294696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.306572] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.306592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.306600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.319200] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.319221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.319230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.331246] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.331267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.331282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.343149] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.343170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.343178] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.354304] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.354328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.354337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.368197] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.368217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.368226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.379800] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.379820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.379828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.392338] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.392359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.392367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.406585] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.406606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.406614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.417227] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.417248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.417257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.432766] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.432787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.432796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.446428] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.446452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.446461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.461121] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.461141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.461150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.892 [2024-06-10 14:37:25.472627] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:47.892 [2024-06-10 14:37:25.472647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.892 [2024-06-10 14:37:25.472656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.239 [2024-06-10 14:37:25.488310] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.488335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.488343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.499321] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.499342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.499351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.513837] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.513866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.530304] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.530328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:48.240 [2024-06-10 14:37:25.530337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.545499] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.545519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.545528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.555493] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.555514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.555522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.569357] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.569378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.569386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.584633] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.584654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.584663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.596333] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.596353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.596361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.608844] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.608864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.608872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.621587] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.621607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:9542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.621616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.635520] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.635541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.635549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.646814] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.646835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.646843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.659702] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.659723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.659731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.672014] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.672035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.672047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.684708] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.684728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.684737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.696794] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.696815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.696824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.709476] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.709496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.709505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.722069] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.722090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.722098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.733170] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.733190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.733198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.748618] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.748639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.748647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.763142] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.763162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.763171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.779344] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.779365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.779374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.240 [2024-06-10 14:37:25.790519] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.240 [2024-06-10 14:37:25.790546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.240 [2024-06-10 14:37:25.790555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.807062] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 
00:28:48.501 [2024-06-10 14:37:25.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.807092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.821617] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.821637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.821646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.834728] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.834748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.834757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.845814] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.845834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.859125] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.859146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.859154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.870298] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.870323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.885959] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.885980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.885988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.899915] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.899935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.899947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.910658] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.910679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.910687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.924591] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.924611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.924620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.939558] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.939578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.939587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.952943] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.952963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.952972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.965290] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.965310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.965323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.976813] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.976833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.976842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:25.988909] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:25.988930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:25.988939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.001249] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.001270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.001279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.014503] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.014527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.014535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.024771] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.024792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.024801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.038875] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.038895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.038904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.050050] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.050071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.050080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.063045] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.063066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.063074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:48.501 [2024-06-10 14:37:26.075686] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.075706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.075714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.501 [2024-06-10 14:37:26.087508] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.501 [2024-06-10 14:37:26.087528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.501 [2024-06-10 14:37:26.087536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.099620] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.099640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.099649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.111988] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.112008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.112017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.123589] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.123609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.123618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.136966] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.136987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.136995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.152929] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.152950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.152958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.167118] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.167139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.179182] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.179203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.179211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.190096] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.190117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.190125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.205754] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.205783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.220098] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.220118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.220127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.231234] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.231254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.231266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.763 [2024-06-10 14:37:26.245489] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.763 [2024-06-10 14:37:26.245510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.763 [2024-06-10 14:37:26.245518] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.256987] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.257007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.257015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.268672] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.268693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.268701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.281756] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.281778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.281786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.296438] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.296458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.296467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.310612] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.310633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.310642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.322112] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.322132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.322141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.336691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.336712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.336721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.764 [2024-06-10 14:37:26.349206] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:48.764 [2024-06-10 14:37:26.349231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.764 [2024-06-10 14:37:26.349240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.025 [2024-06-10 14:37:26.362393] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.025 [2024-06-10 14:37:26.362414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.025 [2024-06-10 14:37:26.362423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.025 [2024-06-10 14:37:26.375289] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.025 [2024-06-10 14:37:26.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.025 [2024-06-10 14:37:26.375324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.025 [2024-06-10 14:37:26.387373] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.025 [2024-06-10 14:37:26.387394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.025 [2024-06-10 14:37:26.387405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.025 [2024-06-10 14:37:26.402165] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.025 [2024-06-10 14:37:26.402185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.025 [2024-06-10 14:37:26.402194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.414945] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.414966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.414976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.429682] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.429702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.026 [2024-06-10 14:37:26.429711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.443691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.443711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.455260] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.455280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.455289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.468754] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.468774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.468783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.479609] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.479630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.479639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.493382] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.493403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.493412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.508836] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.508857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.026 [2024-06-10 14:37:26.508865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.026 [2024-06-10 14:37:26.524732] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990) 00:28:49.026 [2024-06-10 14:37:26.524753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5202 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.026 [2024-06-10 14:37:26.524761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.026 [2024-06-10 14:37:26.539378] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd75990)
00:28:49.026 [2024-06-10 14:37:26.539398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.026 [2024-06-10 14:37:26.539406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.026
00:28:49.026 Latency(us)
00:28:49.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.026 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:49.026 nvme0n1 : 2.01 19505.72 76.19 0.00 0.00 6555.25 3126.61 22500.69
00:28:49.026 ===================================================================================================================
00:28:49.026 Total : 19505.72 76.19 0.00 0.00 6555.25 3126.61 22500.69
00:28:49.026 0
00:28:49.026 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:49.026 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:49.026 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:49.026 | .driver_specific
00:28:49.026 | .nvme_error
00:28:49.026 | .status_code
00:28:49.026 | .command_transient_transport_error'
00:28:49.026 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3208180
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3208180 ']'
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3208180
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3208180
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3208180'
00:28:49.287 killing process with pid 3208180
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3208180
00:28:49.287 Received shutdown signal, test time was about 2.000000 seconds
00:28:49.287
00:28:49.287 Latency(us)
00:28:49.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.287 ===================================================================================================================
00:28:49.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:49.287 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3208180
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3208819
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3208819 /var/tmp/bperf.sock
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3208819 ']'
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:49.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:49.548 14:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.548 [2024-06-10 14:37:27.019622] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization...
00:28:49.548 [2024-06-10 14:37:27.019688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208819 ]
00:28:49.548 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:49.548 Zero copy mechanism will not be used.
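(For reference: a minimal bash sketch of the transient-error check traced above, assembled from the host/digest.sh commands visible in this log; the rpc and count variable names are illustrative, everything else is taken verbatim from the trace.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Ask the bdevperf app (RPC socket /var/tmp/bperf.sock) for per-bdev NVMe error counters
    # and pull out the COMMAND TRANSIENT TRANSPORT ERROR count produced by the injected digest errors.
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The run above reported 153 such completions, so this assertion passes.
    (( count > 0 ))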
00:28:49.548 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.548 [2024-06-10 14:37:27.076710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.548 [2024-06-10 14:37:27.140354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.808 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:49.808 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:49.808 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:49.808 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.069 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.329 nvme0n1 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:50.329 14:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.590 Zero copy mechanism will not be used. 00:28:50.590 Running I/O for 2 seconds... 
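(A condensed bash sketch of the setup just traced, reusing the rpc.py and bdevperf.py invocations exactly as they appear above; the split between the target's default RPC socket (rpc_cmd) and the bdevperf socket (bperf_rpc, -s /var/tmp/bperf.sock) follows how the trace uses those helpers and is a reading of this log rather than a verified recipe; the rpc shorthand is illustrative.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep per-bdev NVMe error counters and retry failed I/O indefinitely
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: make sure CRC32C error injection is off while the controller attaches
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # bdevperf side: attach the TCP controller with the data digest (--ddgst) enabled
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt CRC32C results (arguments as in the trace) so READ data digests fail on the initiator
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the queued bdevperf job: randread, 131072-byte I/O, queue depth 16, 2 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests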
00:28:50.590 [2024-06-10 14:37:27.957283] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:27.957324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:27.957336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:27.968798] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:27.968823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:27.968832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:27.979423] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:27.979445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:27.979454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:27.990227] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:27.990254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:27.990263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:27.999850] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:27.999872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:27.999881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.009307] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.009335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.009343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.018659] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.018680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.018689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.029143] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.029164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.029173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.039833] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.039854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.039862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.050228] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.050251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.050260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.060002] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.060023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.060032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.070647] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.070669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.070679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.079670] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.079692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.079700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.088217] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.088238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.088246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.098281] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.098302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.098310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.108180] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.108201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.108209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.118336] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.118357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.118365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.128916] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.128936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.128945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.138262] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.138283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.147186] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.147207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.147216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.156691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.156712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:50.590 [2024-06-10 14:37:28.156724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.165966] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.590 [2024-06-10 14:37:28.165987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.590 [2024-06-10 14:37:28.165996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.590 [2024-06-10 14:37:28.175978] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.591 [2024-06-10 14:37:28.175998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.591 [2024-06-10 14:37:28.176007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.185148] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.185170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.185178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.194595] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.194616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.194624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.205130] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.205151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.205160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.214480] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.214501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.214510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.223263] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.223285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.223293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.232608] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.232630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.232639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.243170] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.243196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.243205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.251422] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.251444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.251452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.260985] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.261007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.261016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.272366] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.272388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.272396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.281907] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.281930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.281938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.289428] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.289450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.289458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.299379] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.299400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.299409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.308005] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.308026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.308034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.317365] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.317387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.317395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.328243] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.328265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.328274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.338579] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.338601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.338610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.348723] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.348745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.348754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.358200] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 
[2024-06-10 14:37:28.358222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.358231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.366385] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.366407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.366416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.376897] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.376919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.376927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.387354] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.387376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.387384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.395344] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.852 [2024-06-10 14:37:28.395365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.852 [2024-06-10 14:37:28.395373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.852 [2024-06-10 14:37:28.404113] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.853 [2024-06-10 14:37:28.404139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.853 [2024-06-10 14:37:28.404147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.853 [2024-06-10 14:37:28.414972] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.853 [2024-06-10 14:37:28.414994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.853 [2024-06-10 14:37:28.415002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.853 [2024-06-10 14:37:28.424970] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e84670) 00:28:50.853 [2024-06-10 14:37:28.424992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.853 [2024-06-10 14:37:28.425000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.853 [2024-06-10 14:37:28.433141] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.853 [2024-06-10 14:37:28.433163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.853 [2024-06-10 14:37:28.433171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.853 [2024-06-10 14:37:28.441502] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:50.853 [2024-06-10 14:37:28.441525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.853 [2024-06-10 14:37:28.441533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.453918] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.453940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.453949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.464104] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.464126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.464134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.475192] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.475214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.475222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.484017] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.484040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.484049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.494869] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.494891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.494900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.506823] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.506845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.516855] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.516876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.516885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.526667] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.526689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.526698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.538263] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.538285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.538294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.549331] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.549361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.114 [2024-06-10 14:37:28.560012] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.114 [2024-06-10 14:37:28.560034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.114 [2024-06-10 14:37:28.560043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:51.115 [2024-06-10 14:37:28.569919] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.569950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.579937] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.579959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.579972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.589625] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.589647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.589656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.599173] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.599195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.599204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.609196] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.609218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.609227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.619733] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.619755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.619763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.630439] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.630461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.630469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.640431] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.640453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.649645] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.649666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.649674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.660045] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.660067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.660075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.668895] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.668922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.668930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.674890] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.674912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.674920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.685498] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.685521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.685529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.694542] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.694564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.694573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.115 [2024-06-10 14:37:28.703359] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.115 [2024-06-10 14:37:28.703380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.115 [2024-06-10 14:37:28.703389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.715135] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.715158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.715166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.725232] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.725254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.725262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.731889] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.731910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.731919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.739097] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.739119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.739127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.748915] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.748937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.757116] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.757138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.757146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.764605] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.764626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.764635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.770948] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.770970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.770978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.778992] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.779014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.779022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.786536] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.786557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.786565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.797869] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.797892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.797901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.808449] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.808471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.808479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.819004] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 
[2024-06-10 14:37:28.819038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.829362] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.829383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.829392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.836823] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.376 [2024-06-10 14:37:28.836845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.376 [2024-06-10 14:37:28.836853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.376 [2024-06-10 14:37:28.847433] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.847455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.847463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.856518] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.856539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.865804] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.865826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.865834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.874820] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.874841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.874850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.883657] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.883678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.883687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.895202] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.895222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.895231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.904601] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.904626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.904634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.914716] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.914738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.914746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.924415] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.924436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.924445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.934290] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.934312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.934327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.941800] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.941822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.941830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.950292] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.950319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.950328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.377 [2024-06-10 14:37:28.960581] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.377 [2024-06-10 14:37:28.960603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.377 [2024-06-10 14:37:28.960611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:28.970265] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:28.970287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:28.970295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:28.978256] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:28.978278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:28.978287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:28.986953] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:28.986975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:28.986983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:28.996496] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:28.996518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:28.996527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.005921] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.005943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.005951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.016818] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.016840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.016849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.026599] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.026621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.026630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.032933] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.032955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.032963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.040584] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.040606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.040614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.051559] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.051581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.051589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.062278] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.062300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.062312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.069066] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.069088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.069096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.074993] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 
[2024-06-10 14:37:29.075014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.075023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.083952] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.083973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.083981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.092713] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.092735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.092744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.102221] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.102243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.102251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.112740] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.112762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.112770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.122056] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.122078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.122086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.130899] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.130921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.130929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.638 [2024-06-10 14:37:29.140743] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e84670) 00:28:51.638 [2024-06-10 14:37:29.140765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.638 [2024-06-10 14:37:29.140774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.149076] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.149098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.149106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.159312] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.159340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.159348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.168666] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.168687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.168695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.174890] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.174912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.174920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.180554] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.180575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.180583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.189159] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.189181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.189190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.198638] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.198660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.198668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.209282] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.209304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.209322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.218786] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.218808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.218816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.639 [2024-06-10 14:37:29.226398] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.639 [2024-06-10 14:37:29.226420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.639 [2024-06-10 14:37:29.226428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.238484] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.238506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.238515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.250182] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.250204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.250213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.259437] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.259459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.259468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
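(Editor's illustrative note, not part of the captured console output.) Every nvme_tcp.c:1454 entry above reports the same condition: the CRC32C data digest carried in a received NVMe/TCP data PDU did not match the digest recomputed over the payload, so each affected READ is completed with status 00/22 (COMMAND TRANSIENT TRANSPORT ERROR), which lets the host retry. The standalone C sketch below only illustrates that digest check in principle; it is an assumption-based demonstration, not SPDK's actual nvme_tcp.c implementation, and the payload/function names are made up for the example.

#include <inttypes.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
 * algorithm NVMe/TCP uses for its header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[32] = { 0 };                       /* stand-in PDU payload    */
    uint32_t expected = crc32c(payload, sizeof(payload));
    uint32_t received = expected ^ 1u;                 /* simulate bit corruption */

    if (received != expected) {
        /* This mismatch is what the log calls a "data digest error"; the
         * command is then completed with 00/22 (transient transport error)
         * so the initiator may resubmit it. */
        printf("data digest error: got 0x%08" PRIx32 ", want 0x%08" PRIx32 "\n",
               received, expected);
    }
    return 0;
}

(End of illustrative note; the console output continues below.)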
00:28:51.901 [2024-06-10 14:37:29.269991] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.270013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.270021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.278790] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.278811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.278820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.286749] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.286770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.286779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.295648] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.295673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.295682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.301904] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.301924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.301932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.310000] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.310021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.310030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.320292] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.320313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.320327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.328464] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.328485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.328493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.337219] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.337240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.337248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.346551] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.346572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.346581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.352801] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.352823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.352831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.363061] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.363083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.363092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.369007] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.369029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.901 [2024-06-10 14:37:29.369037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.901 [2024-06-10 14:37:29.374749] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.901 [2024-06-10 14:37:29.374770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.374778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.384344] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.384366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.384375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.393558] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.393579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.393587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.403517] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.403540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.403548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.412531] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.412552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.412560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.422578] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.422608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.431071] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.431092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.436758] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.436779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.436790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.445150] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.445171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.445180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.454218] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.454240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.454248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.460861] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.460882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.460890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.469044] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.469066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.469074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.478466] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.478487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.478496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.902 [2024-06-10 14:37:29.486930] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:51.902 [2024-06-10 14:37:29.486951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.902 [2024-06-10 14:37:29.486960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.499728] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.499751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 
[2024-06-10 14:37:29.499759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.510585] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.510608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.510616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.520848] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.520873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.520881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.529373] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.529402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.538548] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.538569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.538577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.548527] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.548548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.548556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.163 [2024-06-10 14:37:29.556420] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.163 [2024-06-10 14:37:29.556441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.163 [2024-06-10 14:37:29.556449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.565468] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.565488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.565497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.574932] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.574953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.574962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.587020] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.587043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.587051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.599617] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.599639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.599648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.611199] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.611221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.611230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.622130] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.622152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.622161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.633624] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.633646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.633655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.644932] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.644954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.644963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.651095] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.651117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.651125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.659360] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.659381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.659390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.667348] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.667370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.667379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.676486] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.676507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.676516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.686138] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.686160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.686174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.695791] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.695814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.695822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.705185] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.705206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.705215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.711932] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.711955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.711963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.721044] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.721066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.721074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.730460] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.730481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.730489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.738225] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.738247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.738256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.746766] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.746788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.746796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.164 [2024-06-10 14:37:29.756566] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.164 [2024-06-10 14:37:29.756587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.164 [2024-06-10 14:37:29.756596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.766267] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 
[2024-06-10 14:37:29.766289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.766297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.775991] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.776012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.776021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.782233] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.782254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.782263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.790806] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.790828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.790836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.797462] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.797484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.797492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.806718] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.806739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.806747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.814487] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.814510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.814519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.824953] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e84670) 00:28:52.425 [2024-06-10 14:37:29.824975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.425 [2024-06-10 14:37:29.824983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.425 [2024-06-10 14:37:29.833828] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.833850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.833862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.843578] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.843600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.843609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.852334] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.852355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.852363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.860434] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.860457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.860465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.870224] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.870247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.870256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.880283] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.880307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.880321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.887999] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.888021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.888030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.899019] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.899041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.899049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.909635] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.909657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.909665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.919887] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.919912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.919921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.930028] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.930049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.930058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.941360] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.941382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.941390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.426 [2024-06-10 14:37:29.951536] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e84670) 00:28:52.426 [2024-06-10 14:37:29.951558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.426 [2024-06-10 14:37:29.951567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0
00:28:52.426
00:28:52.426 Latency(us)
00:28:52.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.426 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:52.426 nvme0n1 : 2.00 3315.36 414.42 0.00 0.00 4820.85 955.73 12670.29
00:28:52.426 ===================================================================================================================
00:28:52.426 Total : 3315.36 414.42 0.00 0.00 4820.85 955.73 12670.29
00:28:52.426 0
00:28:52.426 14:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:52.426 14:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:52.426 14:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:52.426 | .driver_specific
00:28:52.426 | .nvme_error
00:28:52.426 | .status_code
00:28:52.426 | .command_transient_transport_error'
00:28:52.426 14:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3208819
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3208819 ']'
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3208819
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3208819
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3208819'
00:28:52.686 killing process with pid 3208819
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3208819
00:28:52.686 Received shutdown signal, test time was about 2.000000 seconds
00:28:52.686
00:28:52.686 Latency(us)
00:28:52.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.686 ===================================================================================================================
00:28:52.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3208819
00:28:52.686 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
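The get_transient_errcount step traced just above is a single RPC round trip: host/digest.sh asks the running bdevperf instance for bdev_get_iostat over the /var/tmp/bperf.sock socket and pulls the command_transient_transport_error counter out of the per-controller NVMe error statistics (the counters that bdev_nvme_set_options --nvme-error-stat turns on), then only requires the count to be non-zero, as in the (( 214 > 0 )) check. A minimal stand-alone sketch of the same query, assuming an SPDK checkout at $SPDK_DIR and a bdevperf process already listening on /var/tmp/bperf.sock as in this run; errcount is an illustrative name, not part of the test scripts:

  # Query bdevperf's iostat for nvme0n1 and extract the transient transport error counter
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only asserts that at least one injected error was counted
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"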
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3209444
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3209444 /var/tmp/bperf.sock
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3209444 ']'
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:52.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:52.947 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:52.947 [2024-06-10 14:37:30.433970] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization...
00:28:52.947 [2024-06-10 14:37:30.434025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209444 ]
00:28:52.947 EAL: No free 2048 kB hugepages reported on node 1
00:28:52.947 [2024-06-10 14:37:30.491000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:53.208 [2024-06-10 14:37:30.554848] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:53.208 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:53.469 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:53.469 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:53.469 14:37:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.469 nvme0n1 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:53.469 14:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.729 Running I/O for 2 seconds... 00:28:53.729 [2024-06-10 14:37:31.169637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f81e0 00:28:53.729 [2024-06-10 14:37:31.170571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.170602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.181814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.729 [2024-06-10 14:37:31.182780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.182802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.193596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f3e60 00:28:53.729 [2024-06-10 14:37:31.194553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.194573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.205366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f6020 00:28:53.729 [2024-06-10 14:37:31.206326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.206346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.217190] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f81e0 00:28:53.729 [2024-06-10 14:37:31.218126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.218145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.228940] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x14fcc10) with pdu=0x2000190fa3a0 00:28:53.729 [2024-06-10 14:37:31.229869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.240690] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ed920 00:28:53.729 [2024-06-10 14:37:31.241645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.241666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.252881] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f1868 00:28:53.729 [2024-06-10 14:37:31.253974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.253993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.264807] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f2948 00:28:53.729 [2024-06-10 14:37:31.265920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.265940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.275830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f7970 00:28:53.729 [2024-06-10 14:37:31.276931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.276950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.289022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ee190 00:28:53.729 [2024-06-10 14:37:31.290292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.290312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.300773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190fac10 00:28:53.729 [2024-06-10 14:37:31.302046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.302066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.729 [2024-06-10 14:37:31.312532] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ef270 00:28:53.729 [2024-06-10 14:37:31.313815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.729 [2024-06-10 14:37:31.313835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.324285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e5a90 00:28:53.991 [2024-06-10 14:37:31.325564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.335263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190fc998 00:28:53.991 [2024-06-10 14:37:31.336523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.336542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.348369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ea248 00:28:53.991 [2024-06-10 14:37:31.349796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.349817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.360097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e12d8 00:28:53.991 [2024-06-10 14:37:31.361503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.361523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.371804] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e8d30 00:28:53.991 [2024-06-10 14:37:31.373237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.373257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.383565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e49b0 00:28:53.991 [2024-06-10 14:37:31.384960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.384979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:53.991 
[2024-06-10 14:37:31.395698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e6b70 00:28:53.991 [2024-06-10 14:37:31.397278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.405357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ddc00 00:28:53.991 [2024-06-10 14:37:31.406275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.406294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.418612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f8618 00:28:53.991 [2024-06-10 14:37:31.420186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.420205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.429561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e5658 00:28:53.991 [2024-06-10 14:37:31.430652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.430672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.441172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e5a90 00:28:53.991 [2024-06-10 14:37:31.442269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.442292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.452928] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e6b70 00:28:53.991 [2024-06-10 14:37:31.454034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.991 [2024-06-10 14:37:31.454054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:53.991 [2024-06-10 14:37:31.466198] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e7c50 00:28:53.991 [2024-06-10 14:37:31.467940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.467959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005c p:0 
m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.477142] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190ebfd0 00:28:53.992 [2024-06-10 14:37:31.478406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.478426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.489159] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190f7970 00:28:53.992 [2024-06-10 14:37:31.490256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.490276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.499898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e38d0 00:28:53.992 [2024-06-10 14:37:31.501106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.501126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.512846] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e73e0 00:28:53.992 [2024-06-10 14:37:31.514318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.514338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.523984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.524303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.524327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.536070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.536279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.536298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.548123] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.548468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.548488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.560205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.560551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.560570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.572255] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.572589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.572608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.992 [2024-06-10 14:37:31.584342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:53.992 [2024-06-10 14:37:31.584652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.992 [2024-06-10 14:37:31.584671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.596393] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.596709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.596729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.608459] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.608785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.608804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.620524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.620851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.620871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.632605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.632931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.632951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.644666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.645003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.645023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.656738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.657067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.657087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.668970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.669204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.669223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.681182] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.681527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.681546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.693246] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.693626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.693646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.705353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.705725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.705744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.717395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.717726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.717746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.729487] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.729800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.729819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.741558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.741889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.741909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.753631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.753962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.753984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.765688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.766022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.766041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.777785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.778094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.778113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.789840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.790162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.790181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.801957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.802262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.802281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.814032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.814332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.814352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.826087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.826454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.253 [2024-06-10 14:37:31.838172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.253 [2024-06-10 14:37:31.838491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.253 [2024-06-10 14:37:31.838511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.514 [2024-06-10 14:37:31.850233] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.514 [2024-06-10 14:37:31.850561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.514 [2024-06-10 14:37:31.850581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.514 [2024-06-10 14:37:31.862308] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.514 [2024-06-10 14:37:31.862628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.514 [2024-06-10 14:37:31.862647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.514 [2024-06-10 14:37:31.874398] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.514 [2024-06-10 14:37:31.874737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.514 [2024-06-10 14:37:31.874756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.514 [2024-06-10 14:37:31.886461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.514 [2024-06-10 14:37:31.886777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.514 [2024-06-10 
14:37:31.886796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.898544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.898852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.898871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.910840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.911143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.911163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.922911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.923216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.923235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.934971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.935302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.935325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.947056] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.947385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.947405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.959117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.959422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.959441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.971184] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.971505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.515 [2024-06-10 14:37:31.971525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.983246] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.983556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.983575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:31.995308] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:31.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:31.995647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.007364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.007694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.007713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.019448] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.019767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.019786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.031503] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.031812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.031830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.043567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.043870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.043889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.055615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.055915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11239 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.055933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.067677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.068003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.068022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.079761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.080098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.080117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.091837] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.092164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.092184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.515 [2024-06-10 14:37:32.103891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.515 [2024-06-10 14:37:32.104196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.515 [2024-06-10 14:37:32.104216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.115975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.116290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.116309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.128025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.128343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.128362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.140108] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.140486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3634 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.140506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.152178] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.152530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.152550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.164257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.164613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.164633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.176325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.176671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.176693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.188368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.188710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.188729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.200460] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.200834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.200853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.212555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.212874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.212894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.224605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.224939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.224958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.236678] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.237004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.237023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.248739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.249069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.249089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.260824] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.261157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.261175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.272891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.273213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.273233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.284969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.285302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.285327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.297050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.297368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.297387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.309256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.309606] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.309625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.321298] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.321692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.321711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.333412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.333758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.333777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.345471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.345678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.345697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.775 [2024-06-10 14:37:32.357527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:54.775 [2024-06-10 14:37:32.357843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.775 [2024-06-10 14:37:32.357862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.035 [2024-06-10 14:37:32.369568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.035 [2024-06-10 14:37:32.369940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.035 [2024-06-10 14:37:32.369960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.035 [2024-06-10 14:37:32.381677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.382003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.393725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.394029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.394048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.405805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.406108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.406127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.417819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.418128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.418147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.429910] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.430210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.430229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.441954] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.442261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.442288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.454017] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.454332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.454351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.466087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.466428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.466447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.478164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 
14:37:32.478476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.490197] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.490583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.490605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.502253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.502569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.502588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.514293] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.514604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.514623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.526380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.526683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.526702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.538447] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.538821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.538840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.550507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.550838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.550857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.562559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 
00:28:55.036 [2024-06-10 14:37:32.562866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.562885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.574626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.574963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.574983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.586650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.586976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.586996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.598747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.599061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.599080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.610793] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.611102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.611121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.036 [2024-06-10 14:37:32.622834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.036 [2024-06-10 14:37:32.623149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.036 [2024-06-10 14:37:32.623168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.634863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.635167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.635186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.646912] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) 
with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.647248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.647267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.658963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.659271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.659291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.671047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.671373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.683169] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.683524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.683543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.695217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.695561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.695580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.707256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.707638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.707656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.719326] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.719700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.719719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.297 [2024-06-10 14:37:32.731387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.297 [2024-06-10 14:37:32.731705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.297 [2024-06-10 14:37:32.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.743476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.743779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.743798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.755507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.755833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.755852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.767558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.767858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.779615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.779925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.779944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.791680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.792020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.792039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.803728] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.804052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.804077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.815781] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.816103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.816122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.827816] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.828160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.828179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.839871] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.840168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.840187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.851921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.852250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.852269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.863960] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.864263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.876029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.876333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.876352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.298 [2024-06-10 14:37:32.888055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.298 [2024-06-10 14:37:32.888361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.298 [2024-06-10 14:37:32.888381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.900137] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.900532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.900551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.912409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.912788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.912808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.924498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.924842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.924860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.936553] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.936873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.936892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.948619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.948955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.948973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.960645] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.960968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.960987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.972716] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.973049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.973068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 
[2024-06-10 14:37:32.984797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.985103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.985122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:32.996858] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:32.997163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:32.997183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.008911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.009219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.009238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.020922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.021231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.021250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.032972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.033276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.033296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.045014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.045320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.045339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.057082] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.057457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.057476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.069136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.069440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.069459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.081240] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.081595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.081614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.093271] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.093601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.093620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.105331] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.105657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.105676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.117364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.117690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.117712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.129421] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.129790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.129809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.559 [2024-06-10 14:37:33.141451] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.559 [2024-06-10 14:37:33.141830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.559 [2024-06-10 14:37:33.141849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.820 [2024-06-10 14:37:33.153528] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.820 [2024-06-10 14:37:33.153859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.820 [2024-06-10 14:37:33.153878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.820 [2024-06-10 14:37:33.165561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fcc10) with pdu=0x2000190e3060 00:28:55.820 [2024-06-10 14:37:33.165860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.820 [2024-06-10 14:37:33.165879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.820 00:28:55.820 Latency(us) 00:28:55.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.820 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.820 nvme0n1 : 2.01 21242.29 82.98 0.00 0.00 6012.75 3003.73 14199.47 00:28:55.820 =================================================================================================================== 00:28:55.820 Total : 21242.29 82.98 0.00 0.00 6012.75 3003.73 14199.47 00:28:55.820 0 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:55.820 | .driver_specific 00:28:55.820 | .nvme_error 00:28:55.820 | .status_code 00:28:55.820 | .command_transient_transport_error' 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3209444 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3209444 ']' 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3209444 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:55.820 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3209444 00:28:56.081 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:56.081 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3209444' 00:28:56.082 killing process with pid 3209444 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@968 -- # kill 3209444 00:28:56.082 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.082 00:28:56.082 Latency(us) 00:28:56.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.082 =================================================================================================================== 00:28:56.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3209444 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3209999 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3209999 /var/tmp/bperf.sock 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 3209999 ']' 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:56.082 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.082 [2024-06-10 14:37:33.635895] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:28:56.082 [2024-06-10 14:37:33.635950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209999 ] 00:28:56.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.082 Zero copy mechanism will not be used. 
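[Editor's note] The pass above ends with the harness reading per-bdev error statistics from the bdevperf instance and asserting that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (the "(( 167 > 0 ))" check traced at host/digest.sh@71). The sketch below is the editor's reconstruction of that check from the traced lines (host/digest.sh@27, @28 and the rpc.py expansion at @18); it is not the repository source, and it assumes rpc.py and jq are available on PATH and that bdevperf is listening on /var/tmp/bperf.sock (socket path copied from the trace):

    # Count transient transport errors reported for a bdev by the bdevperf app.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    # The test passes when the count is non-zero, as in the traced check:
    (( $(get_transient_errcount nvme0n1) > 0 ))

After that check the first bdevperf process is killed and a second pass is launched ("run_bperf_err randwrite 131072 16"): a fresh bdevperf instance doing 128 KiB random writes at queue depth 16, started with -z so it idles on its RPC socket until a perform_tests RPC arrives.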
00:28:56.082 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.342 [2024-06-10 14:37:33.693000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.342 [2024-06-10 14:37:33.756551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.342 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:56.342 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:56.342 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:56.342 14:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.603 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.174 nvme0n1 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:57.174 14:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.174 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:57.174 Zero copy mechanism will not be used. 00:28:57.174 Running I/O for 2 seconds... 
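[Editor's note] Before traffic starts, the trace above wires up the second pass over the bdevperf RPC socket: NVMe error statistics are enabled via bdev_nvme_set_options (--nvme-error-stat, with the traced --bdev-retry-count -1), crc32c error injection is reset and then armed with accel_error_inject_error, and the controller is attached with --ddgst so TCP data digests are generated and verified; the injected corruption then surfaces as the "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR records that follow. A hedged sketch of that sequence, with the address, NQN and all flags copied verbatim from the trace (the socket behind rpc_cmd is not expanded in the trace, so the target-side helper below is an assumption):

    bperf_rpc() { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    tgt_rpc()   { ./scripts/rpc.py "$@"; }   # assumed default socket for rpc_cmd

    # Enable per-bdev NVMe error counters; retry count value as traced.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any earlier crc32c injection, then attach with data digest enabled.
    tgt_rpc   accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
              -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption (flags as traced) and kick off the 2-second run.
    tgt_rpc   accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Once the run completes, the same get_transient_errcount check shown after the previous pass is expected to report a non-zero count for nvme0n1.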
00:28:57.174 [2024-06-10 14:37:34.636507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.636897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.636929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.649234] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.649703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.649728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.660266] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.660644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.660666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.669134] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.669496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.669517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.678759] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.679141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.679162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.689607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.689976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.689997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.695847] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.696084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.696103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.704260] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.704329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.704348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.711874] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.174 [2024-06-10 14:37:34.712134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.174 [2024-06-10 14:37:34.712155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.174 [2024-06-10 14:37:34.721078] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.721441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.721461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.727827] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.728182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.728202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.734971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.735337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.735357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.741923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.742185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.742206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.750674] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.750939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.750968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.756679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.757050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.757071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.175 [2024-06-10 14:37:34.763526] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.175 [2024-06-10 14:37:34.763776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.175 [2024-06-10 14:37:34.763796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.769701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.770072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.770092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.776995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.777377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.777398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.785076] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.785446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.785466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.792410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.792791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.792812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.798449] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.798803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.798823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.805617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.805985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.806006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.436 [2024-06-10 14:37:34.813849] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.436 [2024-06-10 14:37:34.814227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.436 [2024-06-10 14:37:34.814247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.820213] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.820598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.820618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.825129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.825386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.825406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.833732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.834102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.834123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.841353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.841731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.841751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.847593] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.847976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 
[2024-06-10 14:37:34.847997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.855158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.855529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.855549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.861569] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.861931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.861951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.869754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.870131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.877639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.878010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.878030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.885055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.885305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.885333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.891418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.891784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.891804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.897941] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.898301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.898326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.903872] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.904104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.904125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.910284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.910526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.910546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.917894] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.918139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.918159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.924775] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.925135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.925155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.931351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.931618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.931648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.940384] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.940738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.940758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.947145] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.947387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.947407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.951961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.952207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.952226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.958192] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.958523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.958544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.965963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.966198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.966226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.972463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.972807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.972827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.978908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.979260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.979280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.984141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.437 [2024-06-10 14:37:34.984368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.437 [2024-06-10 14:37:34.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.437 [2024-06-10 14:37:34.988752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:34.988975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:34.988995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.438 [2024-06-10 14:37:34.997221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:34.997549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:34.997569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.438 [2024-06-10 14:37:35.003084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:35.003415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:35.003436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.438 [2024-06-10 14:37:35.009571] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:35.009846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:35.009867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.438 [2024-06-10 14:37:35.017480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:35.017794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:35.017815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.438 [2024-06-10 14:37:35.024525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.438 [2024-06-10 14:37:35.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.438 [2024-06-10 14:37:35.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.030446] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.030768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.030789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.036609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 
[2024-06-10 14:37:35.036951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.036972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.043720] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.044070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.044091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.052234] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.052575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.052595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.059510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.059868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.059887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.067457] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.067788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.067809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.076749] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.077073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.077094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.085548] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.085929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.085949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.094134] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.094513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.094534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.102533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.102880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.102901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.112019] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.112307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.112331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.120023] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.120267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.120294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.129680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.130014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.130035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.138751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.139092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.139113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.699 [2024-06-10 14:37:35.147114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.699 [2024-06-10 14:37:35.147411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.699 [2024-06-10 14:37:35.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.155738] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.156007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.156026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.165028] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.165336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.165357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.172936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.173239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.173259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.182482] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.182748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.182768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.191949] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.192274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.192294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.199046] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.199352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.199371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.207199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.207494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.207514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:57.700 [2024-06-10 14:37:35.213709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.213957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.213977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.221569] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.222040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.222061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.229310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.229545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.229564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.237162] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.237523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.237543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.244707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.244926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.244946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.250883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.251113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.251132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.256520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.256796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.256816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.262254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.262524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.262545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.269688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.269912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.269931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.276602] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.276916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.276937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.284079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.284416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.284437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.700 [2024-06-10 14:37:35.289694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.700 [2024-06-10 14:37:35.289915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.700 [2024-06-10 14:37:35.289935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.961 [2024-06-10 14:37:35.295150] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.961 [2024-06-10 14:37:35.295377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.961 [2024-06-10 14:37:35.295396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.961 [2024-06-10 14:37:35.302353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.961 [2024-06-10 14:37:35.302718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.302739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.310215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.310557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.310577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.315877] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.316100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.316125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.322308] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.322536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.322555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.328917] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.329137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.329156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.336377] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.336810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.336831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.343826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.344218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.344239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.349902] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.350239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.350260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.355312] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.355539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.355558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.359782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.360121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.360141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.367485] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.367856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.367876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.374225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.374500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.374519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.381240] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.381624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.381644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.389416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.389739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.389759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.396799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.397021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 
[2024-06-10 14:37:35.397040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.401623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.401842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.401861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.407591] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.407927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.407948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.413970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.414343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.414363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.421441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.421768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.421789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.427050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.427381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.427404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.433582] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.433932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.433952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.440062] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.440334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.440353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.445985] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.446307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.450800] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.451138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.451158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.457237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.457481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.463225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.463539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.962 [2024-06-10 14:37:35.463559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.962 [2024-06-10 14:37:35.470976] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.962 [2024-06-10 14:37:35.471196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.471215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.479735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.479966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.479986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.484707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.484923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.484942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.492248] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.492604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.492625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.501488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.501769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.501792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.512152] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.512488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.512508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.521707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.521945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.521965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.532497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.532819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.532839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.540087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.540447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.540467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.546553] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.546905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.546925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.963 [2024-06-10 14:37:35.553214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:57.963 [2024-06-10 14:37:35.553530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.963 [2024-06-10 14:37:35.553550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.560761] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.561133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.561154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.567690] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.568020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.568040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.574221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.574592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.574612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.582032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.582247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.582266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.587288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.587507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.587526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.594011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 
[2024-06-10 14:37:35.594235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.594254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.599498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.599711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.599730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.606888] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.607254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.607274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.614045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.614301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.614334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.621743] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.621810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.621829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.630633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.630700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.639280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.639363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.639396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.644566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.644625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.644643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.651856] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.651921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.651939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.658052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.658363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.658382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.665686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.665764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.671659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.671746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.677448] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.677553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.684672] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.684767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.684786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.690742] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.690843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.690861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.697454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.697533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.697552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.704058] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.704161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.710139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.224 [2024-06-10 14:37:35.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.224 [2024-06-10 14:37:35.710270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.224 [2024-06-10 14:37:35.714984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.715057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.715076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.719448] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.719511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.719530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.723576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.723651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.723670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:58.225 [2024-06-10 14:37:35.730508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.730602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.730620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.736960] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.737029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.737048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.744021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.751854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.751920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.751939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.758929] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.759000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.759019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.766359] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.766429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.766448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.773139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.773203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.773221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.780508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.780589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.780608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.789198] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.789280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.789306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.794686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.794765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.794784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.802906] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.802992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.803010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.808090] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.808159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.225 [2024-06-10 14:37:35.814426] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.225 [2024-06-10 14:37:35.814484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.225 [2024-06-10 14:37:35.814502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.486 [2024-06-10 14:37:35.820368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.486 [2024-06-10 14:37:35.820450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.486 [2024-06-10 14:37:35.820468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.486 [2024-06-10 14:37:35.825193] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.486 [2024-06-10 14:37:35.825264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.486 [2024-06-10 14:37:35.825283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.486 [2024-06-10 14:37:35.831909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.486 [2024-06-10 14:37:35.831971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.486 [2024-06-10 14:37:35.831991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.486 [2024-06-10 14:37:35.838870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.486 [2024-06-10 14:37:35.838949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.486 [2024-06-10 14:37:35.838971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.847039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.847118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.855322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.855388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.855407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.861954] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.862034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.862053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.868442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.868537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.868556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.874669] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.874760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.874782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.880176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.880266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.880284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.885973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.886032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.886050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.892515] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.892574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.892592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.898617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.898695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.898719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.905172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.905278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.905297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.911090] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.911160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 
[2024-06-10 14:37:35.911178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.916346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.916410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.916429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.924481] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.924562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.924581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.933723] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.933820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.933838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.941962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.942035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.942054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.948635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.948857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.948876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.958353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.958631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.958650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.966211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.966310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.966334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.973763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.973834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.973852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.983109] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.983183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.983205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.990310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.990392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.990419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:35.996752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:35.996840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:35.996859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:36.004620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:36.004723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:36.004741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:36.013187] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:36.013287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:36.013305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.487 [2024-06-10 14:37:36.020025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.487 [2024-06-10 14:37:36.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.487 [2024-06-10 14:37:36.020133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.026597] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.026918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.026937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.033375] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.033446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.033464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.040992] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.041089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.041108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.048209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.048278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.048297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.055096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.055174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.055193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.062616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.062703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.062721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.068742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.069112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.069131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.488 [2024-06-10 14:37:36.075312] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.488 [2024-06-10 14:37:36.075398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.488 [2024-06-10 14:37:36.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.081074] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.081134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.081153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.088589] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.088687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.088711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.094076] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.094141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.094159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.101576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.101891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.101910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.106396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.106606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.106624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.112907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 
[2024-06-10 14:37:36.112984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.113003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.119011] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.119082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.119101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.125170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.125247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.125266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.130174] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.130246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.130264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.749 [2024-06-10 14:37:36.135935] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.749 [2024-06-10 14:37:36.135994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.749 [2024-06-10 14:37:36.136013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.140656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.140884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.140904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.146642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.146721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.146740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.153325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.153448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.160627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.160731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.160750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.166520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.166614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.166632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.172542] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.172613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.172632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.178785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.178852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.178871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.184039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.184138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.184157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.189363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.189431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.189449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.195797] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.195865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.195883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.200699] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.200769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.200787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.204482] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.204608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.204626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.213266] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.213335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.213354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.220537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.220596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.220614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.226990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.227092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.227110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.233108] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.233198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.233217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:58.750 [2024-06-10 14:37:36.239751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.239851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.239869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.247245] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.247339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.247364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.253376] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.253454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.253472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.259979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.260273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.260292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.267754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.267867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.267885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.274403] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.274515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.274534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.283727] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.283785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.283803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.291442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.291507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.291526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.298845] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.298928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.298946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.306840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.306943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.306962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.313970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.750 [2024-06-10 14:37:36.314198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.750 [2024-06-10 14:37:36.314217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.750 [2024-06-10 14:37:36.322257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.751 [2024-06-10 14:37:36.322322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.751 [2024-06-10 14:37:36.322341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.751 [2024-06-10 14:37:36.328824] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.751 [2024-06-10 14:37:36.328919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.751 [2024-06-10 14:37:36.328938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.751 [2024-06-10 14:37:36.335239] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:58.751 [2024-06-10 14:37:36.335492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.751 [2024-06-10 14:37:36.335511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.751 [2024-06-10 14:37:36.342258] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.342352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.342372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.349252] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.349336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.349355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.355898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.355961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.355980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.364196] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.364288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.364307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.371978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.372039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.372063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.378232] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.378291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.378309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.384218] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.384557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.384576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.389288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.389563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.389583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.395994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.396061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.396079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.012 [2024-06-10 14:37:36.402329] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.012 [2024-06-10 14:37:36.402554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.012 [2024-06-10 14:37:36.402572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.409478] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.409566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.409584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.415210] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.415270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.415298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.420914] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.421011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.421029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.425100] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.425179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 
[2024-06-10 14:37:36.425197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.430969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.431075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.431093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.434799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.434884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.434903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.441283] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.441371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.447443] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.447534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.447553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.454278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.454618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.454638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.461135] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.461335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.461354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.467022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.467096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.467114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.470531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.470623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.470641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.476774] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.477040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.477065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.485032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.485438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.485459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.490547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.490778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.490798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.494298] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.494379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.494398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.497832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.497918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.497936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.503759] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.503839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.503858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.509989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.510232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.510251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.516240] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.516329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.516348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.520596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.520689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.520714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.524556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.524682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.524701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.530507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.530684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.530702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.540297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.540540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.540559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.550754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.550865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.013 [2024-06-10 14:37:36.550884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.013 [2024-06-10 14:37:36.561810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.013 [2024-06-10 14:37:36.562150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.014 [2024-06-10 14:37:36.562171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.014 [2024-06-10 14:37:36.570933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.014 [2024-06-10 14:37:36.571011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.014 [2024-06-10 14:37:36.571031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.014 [2024-06-10 14:37:36.579423] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.014 [2024-06-10 14:37:36.579638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.014 [2024-06-10 14:37:36.579657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.014 [2024-06-10 14:37:36.589542] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.014 [2024-06-10 14:37:36.589633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.014 [2024-06-10 14:37:36.589652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.014 [2024-06-10 14:37:36.599660] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.014 [2024-06-10 14:37:36.599799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.014 [2024-06-10 14:37:36.599819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.274 [2024-06-10 14:37:36.610913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.274 [2024-06-10 14:37:36.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.274 [2024-06-10 14:37:36.611187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.274 [2024-06-10 14:37:36.621675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90 00:28:59.274 
[2024-06-10 14:37:36.621958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.274 [2024-06-10 14:37:36.621978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.274 [2024-06-10 14:37:36.632353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14fd0f0) with pdu=0x2000190fef90
00:28:59.274 [2024-06-10 14:37:36.632703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.274 [2024-06-10 14:37:36.632722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:59.274
00:28:59.274 Latency(us)
00:28:59.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.274 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:59.274 nvme0n1 : 2.01 4406.78 550.85 0.00 0.00 3621.78 1652.05 12288.00
00:28:59.275 ===================================================================================================================
00:28:59.275 Total : 4406.78 550.85 0.00 0.00 3621.78 1652.05 12288.00
00:28:59.275 0
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:59.275 | .driver_specific
00:28:59.275 | .nvme_error
00:28:59.275 | .status_code
00:28:59.275 | .command_transient_transport_error'
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 285 > 0 ))
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3209999
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3209999 ']'
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3209999
00:28:59.275 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3209999
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3209999'
00:28:59.535 killing process with pid 3209999
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3209999
00:28:59.535 Received shutdown signal, test time was about 2.000000 seconds
00:28:59.535
00:28:59.535 Latency(us)
00:28:59.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.535 ===================================================================================================================
00:28:59.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:59.535 14:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3209999
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3207833
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 3207833 ']'
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 3207833
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3207833
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3207833'
00:28:59.535 killing process with pid 3207833
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 3207833
00:28:59.535 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 3207833
00:28:59.796
00:28:59.796 real 0m14.632s
00:28:59.796 user 0m28.878s
00:28:59.796 sys 0m3.364s
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:59.796 ************************************
00:28:59.796 END TEST nvmf_digest_error
00:28:59.796 ************************************
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:59.796 rmmod nvme_tcp
00:28:59.796 rmmod nvme_fabrics
00:28:59.796 rmmod nvme_keyring
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3207833 ']'
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3207833
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 3207833 ']'
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0
3207833
00:28:59.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3207833) - No such process
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 3207833 is not found'
00:28:59.796 Process with pid 3207833 is not found
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:59.796 14:37:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:59.797 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:59.797 14:37:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:02.341 14:37:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:02.341
00:29:02.341 real 0m38.443s
00:29:02.341 user 0m59.579s
00:29:02.341 sys 0m11.969s
00:29:02.341 14:37:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable
00:29:02.341 14:37:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:02.341 ************************************
00:29:02.341 END TEST nvmf_digest
00:29:02.341 ************************************
00:29:02.341 14:37:39 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:29:02.341 14:37:39 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:29:02.341 14:37:39 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:29:02.341 14:37:39 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:02.341 14:37:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:29:02.341 14:37:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:29:02.341 14:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:02.341 ************************************
00:29:02.341 START TEST nvmf_bdevperf
00:29:02.341 ************************************
00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:02.341 * Looking for test storage...
00:29:02.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.341 14:37:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.979 14:37:46 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:08.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:08.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:08.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:08.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.980 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:29:09.241 00:29:09.241 --- 10.0.0.2 ping statistics --- 00:29:09.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.241 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:09.241 00:29:09.241 --- 10.0.0.1 ping statistics --- 00:29:09.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.241 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3214895 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3214895 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3214895 ']' 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:09.241 14:37:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.241 [2024-06-10 14:37:46.740146] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:29:09.241 [2024-06-10 14:37:46.740207] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.241 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.241 [2024-06-10 14:37:46.809537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:09.502 [2024-06-10 14:37:46.884023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:09.502 [2024-06-10 14:37:46.884060] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.502 [2024-06-10 14:37:46.884067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.502 [2024-06-10 14:37:46.884073] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.502 [2024-06-10 14:37:46.884079] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.502 [2024-06-10 14:37:46.884189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.502 [2024-06-10 14:37:46.884362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.502 [2024-06-10 14:37:46.884371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.096 [2024-06-10 14:37:47.664619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.096 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.355 Malloc0 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.355 [2024-06-10 14:37:47.730689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.355 { 00:29:10.355 "params": { 00:29:10.355 "name": "Nvme$subsystem", 00:29:10.355 "trtype": "$TEST_TRANSPORT", 00:29:10.355 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.355 "adrfam": "ipv4", 00:29:10.355 "trsvcid": "$NVMF_PORT", 00:29:10.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.355 "hdgst": ${hdgst:-false}, 00:29:10.355 "ddgst": ${ddgst:-false} 00:29:10.355 }, 00:29:10.355 "method": "bdev_nvme_attach_controller" 00:29:10.355 } 00:29:10.355 EOF 00:29:10.355 )") 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:10.355 14:37:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.355 "params": { 00:29:10.355 "name": "Nvme1", 00:29:10.355 "trtype": "tcp", 00:29:10.355 "traddr": "10.0.0.2", 00:29:10.355 "adrfam": "ipv4", 00:29:10.355 "trsvcid": "4420", 00:29:10.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.355 "hdgst": false, 00:29:10.355 "ddgst": false 00:29:10.355 }, 00:29:10.355 "method": "bdev_nvme_attach_controller" 00:29:10.355 }' 00:29:10.355 [2024-06-10 14:37:47.783116] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:29:10.355 [2024-06-10 14:37:47.783162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214948 ] 00:29:10.355 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.355 [2024-06-10 14:37:47.858936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.355 [2024-06-10 14:37:47.923739] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.922 Running I/O for 1 seconds... 
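[editor's note] The block above is the setup for the first bdevperf pass: tgt_init creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, and a listener on 10.0.0.2:4420; bdevperf is then fed the attach configuration printed by gen_nvmf_target_json over /dev/fd/62. A minimal standalone sketch of an equivalent configuration file follows. The file path /tmp/bdevperf.json and the top-level "subsystems"/"bdev" wrapper are assumptions for illustration; the bdev_nvme_attach_controller parameters and the bdevperf flags are copied from the trace, and the command is assumed to be run from the SPDK repo root.

# Hypothetical standalone reproduction of the attach config printed above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the first pass in the trace:
# queue depth 128, 4 KiB I/O, verify workload, 1 second runtime.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1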
00:29:11.859 00:29:11.859 Latency(us) 00:29:11.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:11.859 Verification LBA range: start 0x0 length 0x4000 00:29:11.859 Nvme1n1 : 1.01 9111.37 35.59 0.00 0.00 13987.50 3072.00 15510.19 00:29:11.859 =================================================================================================================== 00:29:11.860 Total : 9111.37 35.59 0.00 0.00 13987.50 3072.00 15510.19 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3215272 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.860 { 00:29:11.860 "params": { 00:29:11.860 "name": "Nvme$subsystem", 00:29:11.860 "trtype": "$TEST_TRANSPORT", 00:29:11.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.860 "adrfam": "ipv4", 00:29:11.860 "trsvcid": "$NVMF_PORT", 00:29:11.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.860 "hdgst": ${hdgst:-false}, 00:29:11.860 "ddgst": ${ddgst:-false} 00:29:11.860 }, 00:29:11.860 "method": "bdev_nvme_attach_controller" 00:29:11.860 } 00:29:11.860 EOF 00:29:11.860 )") 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:11.860 14:37:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:11.860 "params": { 00:29:11.860 "name": "Nvme1", 00:29:11.860 "trtype": "tcp", 00:29:11.860 "traddr": "10.0.0.2", 00:29:11.860 "adrfam": "ipv4", 00:29:11.860 "trsvcid": "4420", 00:29:11.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.860 "hdgst": false, 00:29:11.860 "ddgst": false 00:29:11.860 }, 00:29:11.860 "method": "bdev_nvme_attach_controller" 00:29:11.860 }' 00:29:11.860 [2024-06-10 14:37:49.421175] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:29:11.860 [2024-06-10 14:37:49.421229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215272 ] 00:29:11.860 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.120 [2024-06-10 14:37:49.497382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.120 [2024-06-10 14:37:49.560793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.379 Running I/O for 15 seconds... 
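[editor's note] The second bdevperf pass above (-t 15 -f, host/bdevperf.sh@29) is the failover half of the test: after a short delay the harness kills the nvmf target (pid 3214895) with SIGKILL while the 15-second verify workload is still in flight, which is why every outstanding READ/WRITE on qid:1 in the output that follows is completed with ABORTED - SQ DELETION. A rough sketch of that sequence is below; the background launch with &/$!, the reuse of /tmp/bdevperf.json from the previous sketch, and the nvmfpid variable name are illustrative assumptions, while the sleep/kill ordering mirrors host/bdevperf.sh@30-35 in the trace.

# nvmfpid is the pid of the nvmf_tgt started earlier inside the cvl_0_0_ns_spdk
# namespace (3214895 in this run); the harness records it via waitforlisten.
nvmfpid=3214895
# Start the long verify run in the background and remember its pid (bdevperf.sh@29-30).
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3              # bdevperf.sh@32: let I/O get going
kill -9 "$nvmfpid"   # bdevperf.sh@33: target disappears mid-run
sleep 3              # bdevperf.sh@35: give the initiator time to notice the dead connection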
00:29:14.917 14:37:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3214895 00:29:14.917 14:37:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:14.917 [2024-06-10 14:37:52.388427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.917 [2024-06-10 14:37:52.388727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.917 [2024-06-10 14:37:52.388740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.918 [2024-06-10 14:37:52.388750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388863] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.388992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.388998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 
[2024-06-10 14:37:52.389201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.918 [2024-06-10 14:37:52.389339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.918 [2024-06-10 14:37:52.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.918 [2024-06-10 14:37:52.389372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.918 [2024-06-10 14:37:52.389389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.918 [2024-06-10 14:37:52.389404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.918 [2024-06-10 14:37:52.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.919 [2024-06-10 14:37:52.389580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53040 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:14.919 [2024-06-10 14:37:52.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.919 [2024-06-10 14:37:52.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.389992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.389999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.390008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.390015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.390024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.919 [2024-06-10 14:37:52.390031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.919 [2024-06-10 14:37:52.390040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.920 [2024-06-10 14:37:52.390597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2bee0 is same with the state(5) to be set 00:29:14.920 [2024-06-10 14:37:52.390614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.920 [2024-06-10 14:37:52.390620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.920 [2024-06-10 14:37:52.390626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53488 len:8 PRP1 0x0 PRP2 0x0 00:29:14.920 [2024-06-10 14:37:52.390636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.920 [2024-06-10 14:37:52.390675] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa2bee0 was disconnected and freed. reset controller. 
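Every completion in the burst above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is the expected outcome for reads still queued on qpair 0xa2bee0 when it is disconnected and freed ahead of the controller reset that follows. Below is a minimal standalone C sketch (not part of the test output and not SPDK code) that decodes the same fields the spdk_nvme_print_completion lines show; the raw value 0x0010 is a constructed example that decodes to exactly the "(00/08) ... p:0 m:0 dnr:0" pattern seen here.

#include <stdint.h>
#include <stdio.h>

/* Layout of the 16-bit completion status word (CQE Dword 3, bits 31:16)
 * per the NVMe base spec: phase tag, status code, status code type,
 * more, do-not-retry (command retry delay bits omitted for brevity). */
struct nvme_status {
    unsigned p;    /* bit 0     */
    unsigned sc;   /* bits 8:1  */
    unsigned sct;  /* bits 11:9 */
    unsigned m;    /* bit 14    */
    unsigned dnr;  /* bit 15    */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   = raw & 0x1u,
        .sc  = (raw >> 1) & 0xffu,
        .sct = (raw >> 9) & 0x7u,
        .m   = (raw >> 14) & 0x1u,
        .dnr = (raw >> 15) & 0x1u,
    };
    return s;
}

int main(void)
{
    /* sc = 0x08 (ABORTED - SQ DELETION), everything else clear. */
    struct nvme_status s = decode_status(0x0010);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}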
00:29:14.920 [2024-06-10 14:37:52.394262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.920 [2024-06-10 14:37:52.394307] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.920 [2024-06-10 14:37:52.395097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.920 [2024-06-10 14:37:52.395113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.920 [2024-06-10 14:37:52.395121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.395343] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.395559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.395567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.395575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.399068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.921 [2024-06-10 14:37:52.408356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.409000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.409037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.409049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.409287] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.409517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.409527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.409535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.413035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
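Each failed cycle in this block follows the same shape: nvme_ctrlr_disconnect starts a reset, posix_sock_create's connect() to 10.0.0.2:4420 is rejected, the subsequent flush reports "(9): Bad file descriptor" because the socket is already gone, and bdev_nvme records the reset as failed before scheduling the next attempt. The two numbers quoted are plain Linux errno values: 111 is ECONNREFUSED (nothing is listening on port 4420 any more) and 9 is EBADF. A small standalone check (not SPDK code; numeric errno values are Linux-specific) confirms the mapping:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno values quoted by the log above. */
    printf("errno 111 -> %s (ECONNREFUSED = %d)\n", strerror(111), ECONNREFUSED);
    printf("errno   9 -> %s (EBADF = %d)\n", strerror(9), EBADF);
    return 0;
}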
00:29:14.921 [2024-06-10 14:37:52.422103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.422806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.422844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.422863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.423099] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.423328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.423337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.423345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.426859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.921 [2024-06-10 14:37:52.435930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.436642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.436682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.436692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.436929] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.437150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.437158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.437165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.440676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.921 [2024-06-10 14:37:52.449751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.450357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.450398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.450410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.450651] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.450871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.450881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.450888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.454400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.921 [2024-06-10 14:37:52.463674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.464333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.464374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.464386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.464628] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.464848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.464864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.464871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.468386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.921 [2024-06-10 14:37:52.477462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.477964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.478006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.478018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.478258] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.478490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.478499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.478506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.482010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.921 [2024-06-10 14:37:52.491286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.491939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.491985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.491996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.492237] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.492468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.492477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.492484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.921 [2024-06-10 14:37:52.496029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:14.921 [2024-06-10 14:37:52.505115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.921 [2024-06-10 14:37:52.505819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.921 [2024-06-10 14:37:52.505866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:14.921 [2024-06-10 14:37:52.505878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:14.921 [2024-06-10 14:37:52.506120] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:14.921 [2024-06-10 14:37:52.506352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.921 [2024-06-10 14:37:52.506362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.921 [2024-06-10 14:37:52.506369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.182 [2024-06-10 14:37:52.509877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.182 [2024-06-10 14:37:52.518966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.519659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.519709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.519721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.519965] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.520187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.520196] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.520203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.523723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.183 [2024-06-10 14:37:52.532817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.533440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.533487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.533500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.533743] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.533964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.533973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.533981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.537511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.183 [2024-06-10 14:37:52.546593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.547290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.547348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.547361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.547606] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.547829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.547838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.547845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.551366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.183 [2024-06-10 14:37:52.560456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.561166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.561220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.561233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.561496] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.561720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.561729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.561737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.565249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.183 [2024-06-10 14:37:52.574349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.575042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.575099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.575111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.575373] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.575598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.575607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.575615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.579129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.183 [2024-06-10 14:37:52.588228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.588945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.589007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.589020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.589273] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.589510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.589520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.589529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.593049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.183 [2024-06-10 14:37:52.602147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.602854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.602915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.602927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.603180] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.603415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.603425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.603440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.606966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.183 [2024-06-10 14:37:52.616068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.616784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.616841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.616852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.617103] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.617341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.617350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.617358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.620874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.183 [2024-06-10 14:37:52.629981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.630717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.630770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.630782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.631029] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.631251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.631260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.631268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.634788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.183 [2024-06-10 14:37:52.643880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.644602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.183 [2024-06-10 14:37:52.644659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.183 [2024-06-10 14:37:52.644671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.183 [2024-06-10 14:37:52.644921] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.183 [2024-06-10 14:37:52.645144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.183 [2024-06-10 14:37:52.645155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.183 [2024-06-10 14:37:52.645163] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.183 [2024-06-10 14:37:52.648694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.183 [2024-06-10 14:37:52.657799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.183 [2024-06-10 14:37:52.658459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.658520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.658532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.658785] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.659010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.659019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.659028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.662559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.184 [2024-06-10 14:37:52.671658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.672361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.672423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.672437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.672690] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.672914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.672925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.672932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.676466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.184 [2024-06-10 14:37:52.685555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.686144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.686172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.686181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.686410] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.686630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.686638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.686645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.690152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.184 [2024-06-10 14:37:52.699443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.700120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.700181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.700194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.700460] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.700692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.700701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.700709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.704282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.184 [2024-06-10 14:37:52.713194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.713916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.713976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.713988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.714241] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.714479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.714489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.714497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.718015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.184 [2024-06-10 14:37:52.727117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.727868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.727928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.727941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.728193] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.728429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.728439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.728447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.731964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.184 [2024-06-10 14:37:52.741048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.741765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.741826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.741839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.742092] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.742330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.742339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.742347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.745879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.184 [2024-06-10 14:37:52.754971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.755701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.755763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.755775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.756027] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.756252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.756262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.756270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.759812] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.184 [2024-06-10 14:37:52.768904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.184 [2024-06-10 14:37:52.769643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.184 [2024-06-10 14:37:52.769704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.184 [2024-06-10 14:37:52.769717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.184 [2024-06-10 14:37:52.769969] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.184 [2024-06-10 14:37:52.770193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.184 [2024-06-10 14:37:52.770202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.184 [2024-06-10 14:37:52.770210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.184 [2024-06-10 14:37:52.773750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.446 [2024-06-10 14:37:52.782846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.446 [2024-06-10 14:37:52.783453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.446 [2024-06-10 14:37:52.783513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.446 [2024-06-10 14:37:52.783527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.446 [2024-06-10 14:37:52.783780] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.446 [2024-06-10 14:37:52.784004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.446 [2024-06-10 14:37:52.784013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.446 [2024-06-10 14:37:52.784021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.446 [2024-06-10 14:37:52.787550] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.446 [2024-06-10 14:37:52.796641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.446 [2024-06-10 14:37:52.797364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.446 [2024-06-10 14:37:52.797425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.446 [2024-06-10 14:37:52.797445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.446 [2024-06-10 14:37:52.797697] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.446 [2024-06-10 14:37:52.797921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.446 [2024-06-10 14:37:52.797930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.446 [2024-06-10 14:37:52.797937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.446 [2024-06-10 14:37:52.801474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.446 [2024-06-10 14:37:52.810560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.446 [2024-06-10 14:37:52.811194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.446 [2024-06-10 14:37:52.811221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.446 [2024-06-10 14:37:52.811230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.446 [2024-06-10 14:37:52.811458] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.446 [2024-06-10 14:37:52.811678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.446 [2024-06-10 14:37:52.811687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.446 [2024-06-10 14:37:52.811694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.446 [2024-06-10 14:37:52.815203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.446 [2024-06-10 14:37:52.824494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.446 [2024-06-10 14:37:52.824985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.446 [2024-06-10 14:37:52.825013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.825021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.825241] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.825474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.825486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.825493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.829027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.447 [2024-06-10 14:37:52.838319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.839034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.839095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.839107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.839373] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.839598] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.839614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.839622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.843146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.447 [2024-06-10 14:37:52.852087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.852778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.852839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.852852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.853104] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.853344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.853354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.853362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.856879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.447 [2024-06-10 14:37:52.865972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.866674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.866734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.866747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.867000] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.867224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.867233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.867240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.870771] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.447 [2024-06-10 14:37:52.879864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.880610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.880671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.880684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.880936] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.881160] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.881169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.881177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.884716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.447 [2024-06-10 14:37:52.893812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.894458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.894517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.894530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.894783] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.895007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.895017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.895025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.898563] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.447 [2024-06-10 14:37:52.907919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.908671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.908728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.908740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.908989] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.909212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.909221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.909228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.912797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.447 [2024-06-10 14:37:52.921695] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.922416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.922478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.922491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.922746] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.922970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.922979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.922987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.926531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.447 [2024-06-10 14:37:52.935621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.936382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.936443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.936455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.936715] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.936939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.936948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.936956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.940501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.447 [2024-06-10 14:37:52.949390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.950012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.950072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.950085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.950352] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.447 [2024-06-10 14:37:52.950577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.447 [2024-06-10 14:37:52.950592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.447 [2024-06-10 14:37:52.950605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.447 [2024-06-10 14:37:52.954128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.447 [2024-06-10 14:37:52.963225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.447 [2024-06-10 14:37:52.963921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.447 [2024-06-10 14:37:52.963982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.447 [2024-06-10 14:37:52.963995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.447 [2024-06-10 14:37:52.964247] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:52.964484] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:52.964495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:52.964503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:52.968035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.448 [2024-06-10 14:37:52.977134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.448 [2024-06-10 14:37:52.977859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.448 [2024-06-10 14:37:52.977920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.448 [2024-06-10 14:37:52.977932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.448 [2024-06-10 14:37:52.978185] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:52.978421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:52.978433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:52.978449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:52.981972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.448 [2024-06-10 14:37:52.991074] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.448 [2024-06-10 14:37:52.991699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.448 [2024-06-10 14:37:52.991727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.448 [2024-06-10 14:37:52.991736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.448 [2024-06-10 14:37:52.991955] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:52.992173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:52.992181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:52.992188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:52.995704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.448 [2024-06-10 14:37:53.005006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.448 [2024-06-10 14:37:53.005568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.448 [2024-06-10 14:37:53.005593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.448 [2024-06-10 14:37:53.005601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.448 [2024-06-10 14:37:53.005820] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:53.006037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:53.006047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:53.006054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:53.009568] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.448 [2024-06-10 14:37:53.018873] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.448 [2024-06-10 14:37:53.019597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.448 [2024-06-10 14:37:53.019658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.448 [2024-06-10 14:37:53.019670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.448 [2024-06-10 14:37:53.019923] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:53.020147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:53.020157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:53.020165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:53.023693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.448 [2024-06-10 14:37:53.032816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.448 [2024-06-10 14:37:53.033566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.448 [2024-06-10 14:37:53.033627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.448 [2024-06-10 14:37:53.033640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.448 [2024-06-10 14:37:53.033892] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.448 [2024-06-10 14:37:53.034116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.448 [2024-06-10 14:37:53.034125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.448 [2024-06-10 14:37:53.034133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.448 [2024-06-10 14:37:53.037664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.046585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.047200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.047262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.047277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.047545] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.047770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.047780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.047788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.051304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.710 [2024-06-10 14:37:53.060418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.061131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.061194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.061206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.061470] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.061695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.061704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.061712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.065234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.074338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.074931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.074958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.074967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.075186] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.075420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.075431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.075438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.078978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.710 [2024-06-10 14:37:53.088286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.088902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.088924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.088932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.089149] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.089376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.089386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.089393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.092899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.102199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.102809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.102830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.102839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.103056] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.103274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.103283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.103290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.106803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.710 [2024-06-10 14:37:53.116097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.116802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.116863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.116876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.117128] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.117365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.117375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.117383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.120965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.129907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.130410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.130437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.130446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.130666] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.130885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.130895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.130902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.134423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.710 [2024-06-10 14:37:53.143728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.144296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.144327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.144337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.144556] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.144773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.144782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.144789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.148296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.157608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.158282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.158353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.158367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.158620] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.158843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.158853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.158861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.162385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.710 [2024-06-10 14:37:53.171514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.172101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.710 [2024-06-10 14:37:53.172128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.710 [2024-06-10 14:37:53.172144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.710 [2024-06-10 14:37:53.172375] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.710 [2024-06-10 14:37:53.172594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.710 [2024-06-10 14:37:53.172604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.710 [2024-06-10 14:37:53.172611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.710 [2024-06-10 14:37:53.176126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.710 [2024-06-10 14:37:53.185437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.710 [2024-06-10 14:37:53.186140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.186201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.186213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.186478] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.186702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.186711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.186719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.190236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.711 [2024-06-10 14:37:53.199344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.200066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.200127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.200140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.200404] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.200629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.200639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.200647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.204166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.711 [2024-06-10 14:37:53.213268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.213869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.213897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.213907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.214126] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.214352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.214369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.214376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.217888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.711 [2024-06-10 14:37:53.227210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.227924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.227985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.227998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.228250] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.228485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.228495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.228503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.232096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.711 [2024-06-10 14:37:53.241004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.241625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.241655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.241663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.241884] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.242102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.242111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.242118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.245639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.711 [2024-06-10 14:37:53.254943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.255509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.255532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.255540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.255758] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.255975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.255984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.255991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.259508] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.711 [2024-06-10 14:37:53.268811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.269404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.269442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.269452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.269684] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.269904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.269914] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.269922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.273440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.711 [2024-06-10 14:37:53.282738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.283333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.283353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.283361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.283578] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.283796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.283804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.283812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.287311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.711 [2024-06-10 14:37:53.296800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.711 [2024-06-10 14:37:53.297350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.711 [2024-06-10 14:37:53.297370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.711 [2024-06-10 14:37:53.297378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.711 [2024-06-10 14:37:53.297596] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.711 [2024-06-10 14:37:53.297813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.711 [2024-06-10 14:37:53.297820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.711 [2024-06-10 14:37:53.297827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.711 [2024-06-10 14:37:53.301338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.973 [2024-06-10 14:37:53.310630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.973 [2024-06-10 14:37:53.311172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.973 [2024-06-10 14:37:53.311189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.973 [2024-06-10 14:37:53.311202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.973 [2024-06-10 14:37:53.311425] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.973 [2024-06-10 14:37:53.311642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.973 [2024-06-10 14:37:53.311659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.973 [2024-06-10 14:37:53.311666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.973 [2024-06-10 14:37:53.315172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.973 [2024-06-10 14:37:53.324465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.324999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.325015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.325023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.325238] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.325460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.325470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.325477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.329012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.974 [2024-06-10 14:37:53.338302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.338956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.338997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.339011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.339250] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.339482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.339491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.339499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.343002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.974 [2024-06-10 14:37:53.352081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.352774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.352814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.352825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.353062] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.353282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.353291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.353303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.356809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.974 [2024-06-10 14:37:53.365887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.366435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.366454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.366461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.366678] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.366894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.366901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.366908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.370407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.974 [2024-06-10 14:37:53.379686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.380210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.380226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.380233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.380453] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.380669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.380677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.380684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.384184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.974 [2024-06-10 14:37:53.393460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.393971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.393985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.393993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.394208] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.394428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.394436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.394443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.397934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.974 [2024-06-10 14:37:53.407210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.408349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.408371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.408379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.408600] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.408816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.408824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.408831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.412331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.974 [2024-06-10 14:37:53.420987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.421503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.421519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.421527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.421743] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.421958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.421966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.421973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.974 [2024-06-10 14:37:53.425571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.974 [2024-06-10 14:37:53.434856] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.974 [2024-06-10 14:37:53.435419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.974 [2024-06-10 14:37:53.435435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.974 [2024-06-10 14:37:53.435442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.974 [2024-06-10 14:37:53.435658] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.974 [2024-06-10 14:37:53.435874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.974 [2024-06-10 14:37:53.435882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.974 [2024-06-10 14:37:53.435888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.439386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.975 [2024-06-10 14:37:53.448658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.449308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.449352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.449362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.449602] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.449822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.449830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.449837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.453335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.975 [2024-06-10 14:37:53.462403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.462993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.463011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.463018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.463234] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.463458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.463467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.463473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.466965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.975 [2024-06-10 14:37:53.476239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.476882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.476918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.476930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.477166] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.477394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.477403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.477411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.480911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.975 [2024-06-10 14:37:53.489989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.490519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.490538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.490546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.490763] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.490978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.490986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.490993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.494495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.975 [2024-06-10 14:37:53.503767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.504295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.504310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.504322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.504538] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.504753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.504760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.504766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.508259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.975 [2024-06-10 14:37:53.517534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.517955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.517972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.517979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.518194] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.518414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.518422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.518429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.521920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.975 [2024-06-10 14:37:53.531414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.531987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.532002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.532009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.532224] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.532444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.532453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.532459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.975 [2024-06-10 14:37:53.535975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.975 [2024-06-10 14:37:53.545251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.975 [2024-06-10 14:37:53.545783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.975 [2024-06-10 14:37:53.545802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.975 [2024-06-10 14:37:53.545810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.975 [2024-06-10 14:37:53.546025] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.975 [2024-06-10 14:37:53.546240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.975 [2024-06-10 14:37:53.546248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.975 [2024-06-10 14:37:53.546255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.976 [2024-06-10 14:37:53.549747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.976 [2024-06-10 14:37:53.559018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.976 [2024-06-10 14:37:53.559662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.976 [2024-06-10 14:37:53.559699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:15.976 [2024-06-10 14:37:53.559709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:15.976 [2024-06-10 14:37:53.559945] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:15.976 [2024-06-10 14:37:53.560165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.976 [2024-06-10 14:37:53.560173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.976 [2024-06-10 14:37:53.560181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.976 [2024-06-10 14:37:53.563686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.236 [2024-06-10 14:37:53.572757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.573342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.573361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.573369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.573585] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.573801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.573809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.573816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.577310] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.236 [2024-06-10 14:37:53.586595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.587254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.587291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.587303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.587547] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.587772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.587782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.587789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.591290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.236 [2024-06-10 14:37:53.600367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.600784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.600804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.600812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.601029] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.601244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.601252] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.601259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.604760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.236 [2024-06-10 14:37:53.614242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.614805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.614842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.614853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.615088] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.615308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.615325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.615333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.618833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.236 [2024-06-10 14:37:53.628107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.628835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.628871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.628882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.629117] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.629345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.629354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.629361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.632858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.236 [2024-06-10 14:37:53.641929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.642512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.642531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.642539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.642756] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.642971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.642979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.642986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.646483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.236 [2024-06-10 14:37:53.655753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.656263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.656278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.656285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.656505] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.656721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.656729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.656736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.660227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.236 [2024-06-10 14:37:53.669505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.670131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.236 [2024-06-10 14:37:53.670168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.236 [2024-06-10 14:37:53.670180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.236 [2024-06-10 14:37:53.670426] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.236 [2024-06-10 14:37:53.670655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.236 [2024-06-10 14:37:53.670663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.236 [2024-06-10 14:37:53.670670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.236 [2024-06-10 14:37:53.674169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.236 [2024-06-10 14:37:53.683240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.236 [2024-06-10 14:37:53.683802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.683821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.683833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.684050] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.684266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.684273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.684280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.687779] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.237 [2024-06-10 14:37:53.697053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.697635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.697672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.697684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.697922] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.698142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.698150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.698157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.701664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.237 [2024-06-10 14:37:53.710955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.711655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.711692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.711704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.711942] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.712161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.712169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.712177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.715680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.237 [2024-06-10 14:37:53.724748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.725416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.725452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.725465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.725703] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.725923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.725932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.725944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.729462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.237 [2024-06-10 14:37:53.738534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.739114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.739132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.739139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.739362] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.739579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.739586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.739593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.743084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.237 [2024-06-10 14:37:53.752384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.752987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.753023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.753033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.753268] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.753497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.753507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.753514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.757011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.237 [2024-06-10 14:37:53.766287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.766835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.766872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.766884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.767120] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.767348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.767358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.767365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.770864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.237 [2024-06-10 14:37:53.780140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.780810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.780846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.780857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.781092] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.781312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.781331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.781338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.784836] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.237 [2024-06-10 14:37:53.793901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.794591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.794628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.794638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.794873] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.795093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.795101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.795108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.798612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.237 [2024-06-10 14:37:53.807689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.237 [2024-06-10 14:37:53.808373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.237 [2024-06-10 14:37:53.808410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.237 [2024-06-10 14:37:53.808421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.237 [2024-06-10 14:37:53.808660] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.237 [2024-06-10 14:37:53.808880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.237 [2024-06-10 14:37:53.808888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.237 [2024-06-10 14:37:53.808895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.237 [2024-06-10 14:37:53.812402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.237 [2024-06-10 14:37:53.821471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.238 [2024-06-10 14:37:53.822137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.238 [2024-06-10 14:37:53.822174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.238 [2024-06-10 14:37:53.822186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.238 [2024-06-10 14:37:53.822437] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.238 [2024-06-10 14:37:53.822658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.238 [2024-06-10 14:37:53.822667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.238 [2024-06-10 14:37:53.822673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.238 [2024-06-10 14:37:53.826171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.499 [2024-06-10 14:37:53.835257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.835933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.835969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.835980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.836215] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.836444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.499 [2024-06-10 14:37:53.836453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.499 [2024-06-10 14:37:53.836460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.499 [2024-06-10 14:37:53.839955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.499 [2024-06-10 14:37:53.849023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.849595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.849631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.849641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.849877] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.850097] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.499 [2024-06-10 14:37:53.850105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.499 [2024-06-10 14:37:53.850112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.499 [2024-06-10 14:37:53.853619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.499 [2024-06-10 14:37:53.862895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.863602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.863639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.863649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.863885] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.864104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.499 [2024-06-10 14:37:53.864112] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.499 [2024-06-10 14:37:53.864124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.499 [2024-06-10 14:37:53.867631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.499 [2024-06-10 14:37:53.876698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.877366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.877403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.877415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.877652] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.877872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.499 [2024-06-10 14:37:53.877880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.499 [2024-06-10 14:37:53.877887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.499 [2024-06-10 14:37:53.881395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.499 [2024-06-10 14:37:53.890474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.891124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.891161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.891171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.891415] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.891635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.499 [2024-06-10 14:37:53.891644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.499 [2024-06-10 14:37:53.891651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.499 [2024-06-10 14:37:53.895150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.499 [2024-06-10 14:37:53.904218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.499 [2024-06-10 14:37:53.904864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.499 [2024-06-10 14:37:53.904900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.499 [2024-06-10 14:37:53.904911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.499 [2024-06-10 14:37:53.905146] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.499 [2024-06-10 14:37:53.905574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.905586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.905594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.909095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.500 [2024-06-10 14:37:53.917965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.918637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.918678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.918689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.918924] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.919144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.919153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.919160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.922667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.500 [2024-06-10 14:37:53.931742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.932386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.932423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.932433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.932669] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.932889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.932897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.932904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.936408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.500 [2024-06-10 14:37:53.945479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.946147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.946184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.946194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.946436] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.946657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.946665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.946672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.950171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.500 [2024-06-10 14:37:53.959271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.959945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.959981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.959992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.960227] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.960458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.960468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.960475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.963970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.500 [2024-06-10 14:37:53.973038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.973577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.973595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.973603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.973820] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.974035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.974043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.974050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.977546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.500 [2024-06-10 14:37:53.986819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:53.987361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:53.987385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:53.987393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:53.987612] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:53.987830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:53.987839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:53.987845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:53.991346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.500 [2024-06-10 14:37:54.000618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:54.001185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:54.001201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:54.001209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:54.001429] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:54.001644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:54.001653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:54.001660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:54.005147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.500 [2024-06-10 14:37:54.014440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:54.014860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:54.014875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:54.014882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:54.015097] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:54.015312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:54.015327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:54.015334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:54.018826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.500 [2024-06-10 14:37:54.028305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:54.028936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:54.028972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:54.028983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:54.029218] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:54.029448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:54.029457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:54.029464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:54.032966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.500 [2024-06-10 14:37:54.042235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.500 [2024-06-10 14:37:54.042908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.500 [2024-06-10 14:37:54.042945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.500 [2024-06-10 14:37:54.042956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.500 [2024-06-10 14:37:54.043191] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.500 [2024-06-10 14:37:54.043420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.500 [2024-06-10 14:37:54.043429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.500 [2024-06-10 14:37:54.043436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.500 [2024-06-10 14:37:54.046934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.501 [2024-06-10 14:37:54.056001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.501 [2024-06-10 14:37:54.056646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.501 [2024-06-10 14:37:54.056683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.501 [2024-06-10 14:37:54.056698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.501 [2024-06-10 14:37:54.056933] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.501 [2024-06-10 14:37:54.057153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.501 [2024-06-10 14:37:54.057161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.501 [2024-06-10 14:37:54.057168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.501 [2024-06-10 14:37:54.060673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.501 [2024-06-10 14:37:54.069740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.501 [2024-06-10 14:37:54.070385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.501 [2024-06-10 14:37:54.070422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.501 [2024-06-10 14:37:54.070434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.501 [2024-06-10 14:37:54.070671] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.501 [2024-06-10 14:37:54.070890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.501 [2024-06-10 14:37:54.070899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.501 [2024-06-10 14:37:54.070906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.501 [2024-06-10 14:37:54.074414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.501 [2024-06-10 14:37:54.083480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.501 [2024-06-10 14:37:54.084148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.501 [2024-06-10 14:37:54.084184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.501 [2024-06-10 14:37:54.084194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.501 [2024-06-10 14:37:54.084438] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.501 [2024-06-10 14:37:54.084659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.501 [2024-06-10 14:37:54.084668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.501 [2024-06-10 14:37:54.084675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.501 [2024-06-10 14:37:54.088173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.762 [2024-06-10 14:37:54.097252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.762 [2024-06-10 14:37:54.097922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.762 [2024-06-10 14:37:54.097958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.762 [2024-06-10 14:37:54.097969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.762 [2024-06-10 14:37:54.098205] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.762 [2024-06-10 14:37:54.098435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.762 [2024-06-10 14:37:54.098452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.762 [2024-06-10 14:37:54.098460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.762 [2024-06-10 14:37:54.101957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.762 [2024-06-10 14:37:54.111023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.762 [2024-06-10 14:37:54.111601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.762 [2024-06-10 14:37:54.111620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.762 [2024-06-10 14:37:54.111628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.762 [2024-06-10 14:37:54.111845] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.762 [2024-06-10 14:37:54.112060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.762 [2024-06-10 14:37:54.112067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.762 [2024-06-10 14:37:54.112074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.762 [2024-06-10 14:37:54.115569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.762 [2024-06-10 14:37:54.124828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.762 [2024-06-10 14:37:54.125283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.762 [2024-06-10 14:37:54.125298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.762 [2024-06-10 14:37:54.125306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.762 [2024-06-10 14:37:54.125526] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.762 [2024-06-10 14:37:54.125743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.762 [2024-06-10 14:37:54.125750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.762 [2024-06-10 14:37:54.125757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.762 [2024-06-10 14:37:54.129253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.763 [2024-06-10 14:37:54.138719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.139241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.139255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.139262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.139483] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.139699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.139707] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.139713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.143198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.763 [2024-06-10 14:37:54.152455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.153025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.153039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.153046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.153262] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.153482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.153490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.153496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.156982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.763 [2024-06-10 14:37:54.166269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.166847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.166863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.166871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.167086] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.167301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.167309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.167322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.170816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.763 [2024-06-10 14:37:54.180096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.180719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.180755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.180765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.181001] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.181221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.181229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.181237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.184741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.763 [2024-06-10 14:37:54.194019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.194689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.194726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.194737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.194976] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.195196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.195205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.195212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.198720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.763 [2024-06-10 14:37:54.207790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.208473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.208509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.208519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.208754] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.208974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.208982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.208990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.212497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.763 [2024-06-10 14:37:54.221561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.222138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.222156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.222164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.222387] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.222603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.222610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.222617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.226107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.763 [2024-06-10 14:37:54.235377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.763 [2024-06-10 14:37:54.235934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.763 [2024-06-10 14:37:54.235970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.763 [2024-06-10 14:37:54.235981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.763 [2024-06-10 14:37:54.236216] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.763 [2024-06-10 14:37:54.236445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.763 [2024-06-10 14:37:54.236455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.763 [2024-06-10 14:37:54.236466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.763 [2024-06-10 14:37:54.239963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.763 [2024-06-10 14:37:54.249230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.249904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.249941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.249951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.250186] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.250415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.250424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.250431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.253927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.764 [2024-06-10 14:37:54.262990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.263594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.263630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.263640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.263875] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.264095] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.264103] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.264110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.267613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.764 [2024-06-10 14:37:54.276880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.277415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.277434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.277441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.277657] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.277873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.277880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.277887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.281381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.764 [2024-06-10 14:37:54.290647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.291201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.291242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.291252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.291497] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.291717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.291726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.291733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.295230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.764 [2024-06-10 14:37:54.304510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.305172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.305209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.305219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.305464] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.305685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.305693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.305700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.309195] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.764 [2024-06-10 14:37:54.318261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.319040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.319077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.319088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.319332] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.319552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.319561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.319568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.323063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:16.764 [2024-06-10 14:37:54.332137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.332810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.332847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.332857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.333092] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.333325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.333335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.333342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.336839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.764 [2024-06-10 14:37:54.345907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.764 [2024-06-10 14:37:54.346448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.764 [2024-06-10 14:37:54.346484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:16.764 [2024-06-10 14:37:54.346495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:16.764 [2024-06-10 14:37:54.346730] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:16.764 [2024-06-10 14:37:54.346950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.764 [2024-06-10 14:37:54.346958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.764 [2024-06-10 14:37:54.346964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.764 [2024-06-10 14:37:54.350481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.025 [2024-06-10 14:37:54.359767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.025 [2024-06-10 14:37:54.360424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.025 [2024-06-10 14:37:54.360460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.025 [2024-06-10 14:37:54.360472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.025 [2024-06-10 14:37:54.360709] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.025 [2024-06-10 14:37:54.360929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.025 [2024-06-10 14:37:54.360937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.025 [2024-06-10 14:37:54.360944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.025 [2024-06-10 14:37:54.364452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.025 [2024-06-10 14:37:54.373543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.025 [2024-06-10 14:37:54.374206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.025 [2024-06-10 14:37:54.374243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.025 [2024-06-10 14:37:54.374253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.025 [2024-06-10 14:37:54.374498] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.025 [2024-06-10 14:37:54.374719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.025 [2024-06-10 14:37:54.374727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.025 [2024-06-10 14:37:54.374734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.378230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.026 [2024-06-10 14:37:54.387299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.387961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.387998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.388008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.388243] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.388472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.388481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.388489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.391986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.026 [2024-06-10 14:37:54.401050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.401639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.401658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.401666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.401882] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.402099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.402107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.402113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.405614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.026 [2024-06-10 14:37:54.414875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.415399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.415414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.415422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.415638] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.415853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.415861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.415867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.419360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.026 [2024-06-10 14:37:54.428653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.429181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.429195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.429207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.429428] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.429644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.429652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.429658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.433150] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.026 [2024-06-10 14:37:54.442426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.443067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.443104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.443114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.443359] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.443580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.443589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.443595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.447096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.026 [2024-06-10 14:37:54.456240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.456919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.456956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.456966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.457202] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.457430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.457440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.457447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.460944] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.026 [2024-06-10 14:37:54.470002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.470626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.470662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.470673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.470908] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.471128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.471140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.471147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.474651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.026 [2024-06-10 14:37:54.483929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.026 [2024-06-10 14:37:54.484554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.026 [2024-06-10 14:37:54.484590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.026 [2024-06-10 14:37:54.484602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.026 [2024-06-10 14:37:54.484838] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.026 [2024-06-10 14:37:54.485058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.026 [2024-06-10 14:37:54.485067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.026 [2024-06-10 14:37:54.485074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.026 [2024-06-10 14:37:54.488580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.026 [2024-06-10 14:37:54.497855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.498497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.498534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.498544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.498779] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.498999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.499007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.499014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.502519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.027 [2024-06-10 14:37:54.511594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.512264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.512301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.512311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.512556] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.512776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.512784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.512792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.516288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.027 [2024-06-10 14:37:54.525354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.526027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.526064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.526075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.526310] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.526543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.526552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.526559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.530067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.027 [2024-06-10 14:37:54.539132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.539717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.539735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.539743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.539959] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.540175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.540183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.540190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.543686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.027 [2024-06-10 14:37:54.552945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.553500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.553516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.553524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.553739] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.553955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.553962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.553968] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.557461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.027 [2024-06-10 14:37:54.566718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.567239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.567253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.567261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.567486] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.567702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.567710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.567717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.571204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.027 [2024-06-10 14:37:54.580497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.581158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.581194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.581204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.581449] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.581670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.581678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.581685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.585181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.027 [2024-06-10 14:37:54.594240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.594822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.594840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.594848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.595065] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.595280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.595287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.595294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.598789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.027 [2024-06-10 14:37:54.608046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.027 [2024-06-10 14:37:54.608687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.027 [2024-06-10 14:37:54.608723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.027 [2024-06-10 14:37:54.608733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.027 [2024-06-10 14:37:54.608969] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.027 [2024-06-10 14:37:54.609189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.027 [2024-06-10 14:37:54.609197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.027 [2024-06-10 14:37:54.609208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.027 [2024-06-10 14:37:54.612714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.290 [2024-06-10 14:37:54.621796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.622189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.622210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.622218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.622441] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.622658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.622665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.622672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.626167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.290 [2024-06-10 14:37:54.635657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.636272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.636308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.636328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.636566] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.636786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.636794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.636801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.640297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.290 [2024-06-10 14:37:54.649562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.650108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.650145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.650157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.650402] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.650623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.650631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.650638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.654135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.290 [2024-06-10 14:37:54.663408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.664076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.664118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.664128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.664373] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.664594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.664602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.664609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.668106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.290 [2024-06-10 14:37:54.677173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.677853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.677890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.677900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.678135] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.678365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.678374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.678381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.681887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.290 [2024-06-10 14:37:54.690969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.691655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.691692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.691702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.691937] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.692156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.692165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.692172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.695679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.290 [2024-06-10 14:37:54.704752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.705422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.705458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.705469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.705704] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.705928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.705937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.705944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.709452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.290 [2024-06-10 14:37:54.718532] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.290 [2024-06-10 14:37:54.719158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.290 [2024-06-10 14:37:54.719194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.290 [2024-06-10 14:37:54.719207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.290 [2024-06-10 14:37:54.719453] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.290 [2024-06-10 14:37:54.719673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.290 [2024-06-10 14:37:54.719682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.290 [2024-06-10 14:37:54.719689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.290 [2024-06-10 14:37:54.723189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.732282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.732824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.732860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.732871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.733107] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.733337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.733347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.733354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.736861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.291 [2024-06-10 14:37:54.746152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.746797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.746834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.746844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.747080] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.747300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.747309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.747324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.750822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.759898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.760561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.760598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.760608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.760844] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.761063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.761071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.761079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.764581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.291 [2024-06-10 14:37:54.773653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.774313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.774358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.774369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.774604] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.774824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.774832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.774839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.778346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.787461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.788050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.788068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.788076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.788293] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.788517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.788525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.788532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.792029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.291 [2024-06-10 14:37:54.801318] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.801929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.801967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.801985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.802220] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.802451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.802461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.802468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.805971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.815058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.815696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.815733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.815743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.815978] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.816197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.816206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.816214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.819761] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.291 [2024-06-10 14:37:54.828869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.829427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.829446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.829454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.829670] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.829886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.829894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.829900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.833418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.842710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.843278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.843293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.843301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.843523] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.843739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.843751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.843757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.847250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.291 [2024-06-10 14:37:54.856542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.857180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.857217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.291 [2024-06-10 14:37:54.857228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.291 [2024-06-10 14:37:54.857471] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.291 [2024-06-10 14:37:54.857692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.291 [2024-06-10 14:37:54.857700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.291 [2024-06-10 14:37:54.857707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.291 [2024-06-10 14:37:54.861209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.291 [2024-06-10 14:37:54.870294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.291 [2024-06-10 14:37:54.870949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.291 [2024-06-10 14:37:54.870985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.292 [2024-06-10 14:37:54.870997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.292 [2024-06-10 14:37:54.871236] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.292 [2024-06-10 14:37:54.871464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.292 [2024-06-10 14:37:54.871474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.292 [2024-06-10 14:37:54.871482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.292 [2024-06-10 14:37:54.874983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.552 [2024-06-10 14:37:54.884071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.884654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.884672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.884680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.884896] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.885111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.885120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.885127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.888630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.552 [2024-06-10 14:37:54.897949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.898461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.898497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.898508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.898744] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.898963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.898971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.898978] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.902483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.552 [2024-06-10 14:37:54.911744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.912288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.912307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.912321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.912538] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.912754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.912761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.912768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.916259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.552 [2024-06-10 14:37:54.925542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.926173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.926210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.926221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.926465] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.926686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.926695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.926702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.930217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.552 [2024-06-10 14:37:54.939305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.939984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.940021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.940032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.940275] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.940505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.940515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.940522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.944029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.552 [2024-06-10 14:37:54.953111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.552 [2024-06-10 14:37:54.953668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.552 [2024-06-10 14:37:54.953686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.552 [2024-06-10 14:37:54.953693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.552 [2024-06-10 14:37:54.953909] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.552 [2024-06-10 14:37:54.954125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.552 [2024-06-10 14:37:54.954132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.552 [2024-06-10 14:37:54.954139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.552 [2024-06-10 14:37:54.957642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.552 [2024-06-10 14:37:54.966930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:54.967574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:54.967611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:54.967622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:54.967857] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:54.968076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:54.968085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:54.968092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:54.971604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.553 [2024-06-10 14:37:54.980685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:54.981379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:54.981416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:54.981428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:54.981666] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:54.981887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:54.981896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:54.981907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:54.985416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.553 [2024-06-10 14:37:54.994519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:54.995142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:54.995178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:54.995189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:54.995431] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:54.995651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:54.995660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:54.995667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:54.999167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.553 [2024-06-10 14:37:55.008448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.008990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.009008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.009015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.009231] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.009453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.009462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.009470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.012963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.553 [2024-06-10 14:37:55.022241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.022757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.022773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.022780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.022996] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.023211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.023219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.023226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.026722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.553 [2024-06-10 14:37:55.036024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.036687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.036729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.036740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.036975] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.037196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.037205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.037212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.040718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.553 [2024-06-10 14:37:55.049794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.050328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.050347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.050354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.050571] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.050786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.050795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.050801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.054293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.553 [2024-06-10 14:37:55.063567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.064083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.064098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.064105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.064326] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.064542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.064550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.064556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.068051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.553 [2024-06-10 14:37:55.077334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.077863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.077877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.077885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.078099] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.078325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.078333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.078340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.081832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.553 [2024-06-10 14:37:55.091110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.091723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.091759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.553 [2024-06-10 14:37:55.091771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.553 [2024-06-10 14:37:55.092008] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.553 [2024-06-10 14:37:55.092228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.553 [2024-06-10 14:37:55.092237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.553 [2024-06-10 14:37:55.092244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.553 [2024-06-10 14:37:55.095755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.553 [2024-06-10 14:37:55.105039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.553 [2024-06-10 14:37:55.105616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.553 [2024-06-10 14:37:55.105635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.554 [2024-06-10 14:37:55.105642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.554 [2024-06-10 14:37:55.105859] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.554 [2024-06-10 14:37:55.106075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.554 [2024-06-10 14:37:55.106083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.554 [2024-06-10 14:37:55.106089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.554 [2024-06-10 14:37:55.109595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.554 [2024-06-10 14:37:55.118869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.554 [2024-06-10 14:37:55.119467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.554 [2024-06-10 14:37:55.119503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.554 [2024-06-10 14:37:55.119515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.554 [2024-06-10 14:37:55.119754] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.554 [2024-06-10 14:37:55.119973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.554 [2024-06-10 14:37:55.119982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.554 [2024-06-10 14:37:55.119989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.554 [2024-06-10 14:37:55.123501] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.554 [2024-06-10 14:37:55.132790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.554 [2024-06-10 14:37:55.133369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.554 [2024-06-10 14:37:55.133406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.554 [2024-06-10 14:37:55.133418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.554 [2024-06-10 14:37:55.133657] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.554 [2024-06-10 14:37:55.133877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.554 [2024-06-10 14:37:55.133886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.554 [2024-06-10 14:37:55.133893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.554 [2024-06-10 14:37:55.137399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.814 [2024-06-10 14:37:55.146680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.814 [2024-06-10 14:37:55.147218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.814 [2024-06-10 14:37:55.147236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.814 [2024-06-10 14:37:55.147244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.814 [2024-06-10 14:37:55.147465] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.814 [2024-06-10 14:37:55.147682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.814 [2024-06-10 14:37:55.147690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.814 [2024-06-10 14:37:55.147697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.814 [2024-06-10 14:37:55.151191] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.814 [2024-06-10 14:37:55.160469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.814 [2024-06-10 14:37:55.161004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.161041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.161053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.161290] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.161518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.161528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.161535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.165034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.815 [2024-06-10 14:37:55.174319] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.174989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.175025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.175040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.175276] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.175504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.175513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.175520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.179019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.815 [2024-06-10 14:37:55.188088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.188649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.188685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.188696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.188931] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.189151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.189160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.189167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.192673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.815 [2024-06-10 14:37:55.201975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.202647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.202684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.202696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.202935] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.203154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.203162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.203169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.206673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.815 [2024-06-10 14:37:55.215775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.216295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.216312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.216326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.216542] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.216757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.216769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.216776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.220269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.815 [2024-06-10 14:37:55.229552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.230107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.230122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.230129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.230350] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.230577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.230586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.230592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.234082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.815 [2024-06-10 14:37:55.243357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.243890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.243905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.243912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.244128] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.244347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.244356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.244363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.247854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.815 [2024-06-10 14:37:55.257128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.257826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.257862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.257873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.258109] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.258336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.258345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.258352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.261848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.815 [2024-06-10 14:37:55.270921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.271644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.271681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.271692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.271927] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.272147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.272155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.272162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.275668] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.815 [2024-06-10 14:37:55.284806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.285427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.285464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.815 [2024-06-10 14:37:55.285476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.815 [2024-06-10 14:37:55.285715] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.815 [2024-06-10 14:37:55.285935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.815 [2024-06-10 14:37:55.285943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.815 [2024-06-10 14:37:55.285950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.815 [2024-06-10 14:37:55.289457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.815 [2024-06-10 14:37:55.298734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.815 [2024-06-10 14:37:55.299321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.815 [2024-06-10 14:37:55.299339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.299347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.299563] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.299778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.299795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.299802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.303296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.816 [2024-06-10 14:37:55.312571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.313224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.313261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.313272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.313523] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.313744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.313753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.313760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.317255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.816 [2024-06-10 14:37:55.326328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.326968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.327005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.327016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.327251] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.327478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.327488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.327495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.331010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.816 [2024-06-10 14:37:55.340084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.340694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.340731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.340743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.340981] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.341201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.341210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.341217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.344721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.816 [2024-06-10 14:37:55.353996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.354467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.354486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.354494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.354711] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.354926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.354934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.354945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.358445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.816 [2024-06-10 14:37:55.367926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.368478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.368514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.368526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.368765] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.368985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.368993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.369000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 [2024-06-10 14:37:55.372505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.816 [2024-06-10 14:37:55.381783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.382561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.382597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.382608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.382844] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.383063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.383072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.816 [2024-06-10 14:37:55.383079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3214895 Killed "${NVMF_APP[@]}" "$@" 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.816 [2024-06-10 14:37:55.386584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3216579 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3216579 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 3216579 ']' 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
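Behind the shell trace above, bdevperf.sh has killed the previous NVMF app and tgt_init/nvmfappstart is relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waiting for it to listen on /var/tmp/spdk.sock. A minimal sketch of that sequence, reusing only the command line visible in the log (the polling loop and the rpc.py socket path are assumptions, not taken from this run):

# relaunch the target in the test namespace with core mask 0xE (command as logged)
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# waitforlisten: block until the app answers on the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done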
00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:17.816 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.816 [2024-06-10 14:37:55.395657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.816 [2024-06-10 14:37:55.396103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.816 [2024-06-10 14:37:55.396121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:17.816 [2024-06-10 14:37:55.396130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:17.816 [2024-06-10 14:37:55.396353] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:17.816 [2024-06-10 14:37:55.396569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.816 [2024-06-10 14:37:55.396577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.817 [2024-06-10 14:37:55.396584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.817 [2024-06-10 14:37:55.400078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.077 [2024-06-10 14:37:55.409623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.077 [2024-06-10 14:37:55.410039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.077 [2024-06-10 14:37:55.410058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.077 [2024-06-10 14:37:55.410066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.077 [2024-06-10 14:37:55.410284] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.077 [2024-06-10 14:37:55.410510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.077 [2024-06-10 14:37:55.410519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.077 [2024-06-10 14:37:55.410526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.077 [2024-06-10 14:37:55.414020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.078 [2024-06-10 14:37:55.423502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.424134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.424171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.424183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.424429] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.424650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.424659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.424666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.428168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.437254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.437931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.437968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.437984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.438219] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.438446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.438457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.438466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.441967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.444884] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:29:18.078 [2024-06-10 14:37:55.444954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.078 [2024-06-10 14:37:55.451044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.451608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.451627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.451635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.451851] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.452067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.452076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.452082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.455581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.464853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.465428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.465444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.465451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.465667] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.465882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.465889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.465896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.469391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.078 [2024-06-10 14:37:55.478657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.479066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.479081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.479092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.479308] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.479528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.479536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.479543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.078 [2024-06-10 14:37:55.483038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.492400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.492891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.492908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.492915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.493130] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.493351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.493360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.493367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.496859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.078 [2024-06-10 14:37:55.506132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.506555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.506570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.506577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.506793] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.507008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.507015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.507022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.510517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.512665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.078 [2024-06-10 14:37:55.519999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.520494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.520532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.520543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.520783] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.521011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.521020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.521027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.524534] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.078 [2024-06-10 14:37:55.533832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.534432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.534469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.534481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.534722] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.534941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.534951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.078 [2024-06-10 14:37:55.534959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.078 [2024-06-10 14:37:55.538469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.078 [2024-06-10 14:37:55.547747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.078 [2024-06-10 14:37:55.548440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.078 [2024-06-10 14:37:55.548477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.078 [2024-06-10 14:37:55.548489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.078 [2024-06-10 14:37:55.548729] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.078 [2024-06-10 14:37:55.548949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.078 [2024-06-10 14:37:55.548958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.548965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.552472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.079 [2024-06-10 14:37:55.561544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.562057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.562094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.562106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.562352] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.562572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.562582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.562589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.566087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.079 [2024-06-10 14:37:55.575374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.576048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.576084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.576095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.576340] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.576562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.576570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.576578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.576617] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.079 [2024-06-10 14:37:55.576641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.079 [2024-06-10 14:37:55.576648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.079 [2024-06-10 14:37:55.576654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.079 [2024-06-10 14:37:55.576659] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
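The app_setup_trace notices above spell out how to pull the 0xFFFF tracepoint data while this target is still running. A short usage sketch based only on those notices (the output redirection and copy destination are illustrative assumptions):

# live snapshot of the nvmf tracepoints for shm id 0, as the notice suggests
spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
# or keep the raw shared-memory file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0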
00:29:18.079 [2024-06-10 14:37:55.576778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.079 [2024-06-10 14:37:55.576933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.079 [2024-06-10 14:37:55.576934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.079 [2024-06-10 14:37:55.580080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.079 [2024-06-10 14:37:55.589157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.589806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.589845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.589856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.590092] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.590313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.590329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.590337] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.593839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.079 [2024-06-10 14:37:55.602909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.603576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.603613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.603624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.603860] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.604080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.604094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.604102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.607608] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
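The reactor messages above line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so CPU bits 1, 2 and 3 are set and bit 0 is clear, which is why exactly three reactors start on cores 1-3. A one-line check (not part of the test, just the arithmetic):

echo 'obase=2; ibase=16; E' | bc    # prints 1110 -> cores 1, 2 and 3 selected, core 0 left free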
00:29:18.079 [2024-06-10 14:37:55.616728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.617191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.617208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.617216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.617438] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.617655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.617662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.617669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.621162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.079 [2024-06-10 14:37:55.630652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.631241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.631257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.631264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.631485] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.631702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.631709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.631716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.635207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.079 [2024-06-10 14:37:55.644479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.644895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.644910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.644917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.645132] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.645353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.645369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.645376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.648865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.079 [2024-06-10 14:37:55.658361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.079 [2024-06-10 14:37:55.658908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.079 [2024-06-10 14:37:55.658924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.079 [2024-06-10 14:37:55.658931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.079 [2024-06-10 14:37:55.659146] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.079 [2024-06-10 14:37:55.659366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.079 [2024-06-10 14:37:55.659375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.079 [2024-06-10 14:37:55.659383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.079 [2024-06-10 14:37:55.662875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.079 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:18.079 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:29:18.079 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:18.079 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:18.079 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.340 [2024-06-10 14:37:55.672151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.340 [2024-06-10 14:37:55.672712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-06-10 14:37:55.672728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.340 [2024-06-10 14:37:55.672735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.340 [2024-06-10 14:37:55.672951] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.340 [2024-06-10 14:37:55.673166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.340 [2024-06-10 14:37:55.673175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.340 [2024-06-10 14:37:55.673182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.340 [2024-06-10 14:37:55.676677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.340 [2024-06-10 14:37:55.685949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.340 [2024-06-10 14:37:55.686597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-06-10 14:37:55.686634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.340 [2024-06-10 14:37:55.686645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.340 [2024-06-10 14:37:55.686881] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.340 [2024-06-10 14:37:55.687101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.340 [2024-06-10 14:37:55.687110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.340 [2024-06-10 14:37:55.687117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.340 [2024-06-10 14:37:55.690626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.340 [2024-06-10 14:37:55.699711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.340 [2024-06-10 14:37:55.700309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-06-10 14:37:55.700334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.340 [2024-06-10 14:37:55.700342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.340 [2024-06-10 14:37:55.700558] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.340 [2024-06-10 14:37:55.700775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.340 [2024-06-10 14:37:55.700783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.340 [2024-06-10 14:37:55.700791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.340 [2024-06-10 14:37:55.704287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.340 [2024-06-10 14:37:55.710528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.340 [2024-06-10 14:37:55.713566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.340 [2024-06-10 14:37:55.714137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.340 [2024-06-10 14:37:55.714152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.340 [2024-06-10 14:37:55.714159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.340 [2024-06-10 14:37:55.714379] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.340 [2024-06-10 14:37:55.714595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.340 [2024-06-10 14:37:55.714603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.340 [2024-06-10 14:37:55.714609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.340 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.341 [2024-06-10 14:37:55.718104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.341 [2024-06-10 14:37:55.727381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.341 [2024-06-10 14:37:55.728026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-06-10 14:37:55.728063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.341 [2024-06-10 14:37:55.728073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.341 [2024-06-10 14:37:55.728308] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.341 [2024-06-10 14:37:55.728537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.341 [2024-06-10 14:37:55.728550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.341 [2024-06-10 14:37:55.728558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.341 [2024-06-10 14:37:55.732068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.341 [2024-06-10 14:37:55.741141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.341 [2024-06-10 14:37:55.741837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-06-10 14:37:55.741874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.341 [2024-06-10 14:37:55.741886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.341 [2024-06-10 14:37:55.742126] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.341 [2024-06-10 14:37:55.742353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.341 [2024-06-10 14:37:55.742362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.341 [2024-06-10 14:37:55.742370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:18.341 Malloc0 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.341 [2024-06-10 14:37:55.745868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.341 [2024-06-10 14:37:55.754938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.341 [2024-06-10 14:37:55.755614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-06-10 14:37:55.755651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.341 [2024-06-10 14:37:55.755662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.341 [2024-06-10 14:37:55.755897] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.341 [2024-06-10 14:37:55.756117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.341 [2024-06-10 14:37:55.756125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.341 [2024-06-10 14:37:55.756133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.341 [2024-06-10 14:37:55.759639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.341 [2024-06-10 14:37:55.768711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.341 [2024-06-10 14:37:55.769398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.341 [2024-06-10 14:37:55.769436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7d3840 with addr=10.0.0.2, port=4420 00:29:18.341 [2024-06-10 14:37:55.769446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d3840 is same with the state(5) to be set 00:29:18.341 [2024-06-10 14:37:55.769682] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d3840 (9): Bad file descriptor 00:29:18.341 [2024-06-10 14:37:55.769902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.341 [2024-06-10 14:37:55.769910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.341 [2024-06-10 14:37:55.769918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.341 [2024-06-10 14:37:55.773426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.341 [2024-06-10 14:37:55.774375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.341 14:37:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3215272 00:29:18.341 [2024-06-10 14:37:55.782502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.341 [2024-06-10 14:37:55.910655] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
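Interleaved with the reconnect noise, the rpc_cmd calls above rebuild the target configuration: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. The same sequence issued directly with SPDK's scripts/rpc.py would look roughly like this (arguments copied from the logged rpc_cmd calls; the default RPC socket is assumed):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420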
00:29:28.337 00:29:28.337 Latency(us) 00:29:28.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.337 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:28.337 Verification LBA range: start 0x0 length 0x4000 00:29:28.337 Nvme1n1 : 15.02 6998.26 27.34 8705.62 0.00 8125.51 788.48 17039.36 00:29:28.337 =================================================================================================================== 00:29:28.337 Total : 6998.26 27.34 8705.62 0.00 8125.51 788.48 17039.36 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.337 rmmod nvme_tcp 00:29:28.337 rmmod nvme_fabrics 00:29:28.337 rmmod nvme_keyring 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3216579 ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 3216579 ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3216579' 00:29:28.337 killing process with pid 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 3216579 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
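For the Latency(us) summary earlier in this block: the columns after the job name are runtime(s), IOPS, MiB/s, Fail/s, TO/s and then average/min/max latency in microseconds, per the header printed by bdevperf. The throughput column follows directly from the IOPS and the 4096-byte IO size used by the job, e.g.:

awk 'BEGIN { printf "%.2f MiB/s\n", 6998.26 * 4096 / (1024 * 1024) }'   # -> 27.34 MiB/s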
00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.337 14:38:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.248 14:38:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:30.248 00:29:30.248 real 0m27.846s 00:29:30.248 user 1m4.048s 00:29:30.248 sys 0m6.916s 00:29:30.248 14:38:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:30.248 14:38:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.248 ************************************ 00:29:30.248 END TEST nvmf_bdevperf 00:29:30.248 ************************************ 00:29:30.248 14:38:07 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:30.248 14:38:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:30.248 14:38:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:30.248 14:38:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.248 ************************************ 00:29:30.248 START TEST nvmf_target_disconnect 00:29:30.248 ************************************ 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:30.248 * Looking for test storage... 
00:29:30.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.248 14:38:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.249 14:38:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.249 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.249 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.249 14:38:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.249 14:38:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:36.863 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.863 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:36.863 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:36.863 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:36.864 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:36.864 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.864 14:38:14 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:36.864 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:36.864 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.864 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:37.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:29:37.124 00:29:37.124 --- 10.0.0.2 ping statistics --- 00:29:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.124 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:29:37.124 00:29:37.124 --- 10.0.0.1 ping statistics --- 00:29:37.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.124 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:37.124 14:38:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:37.125 ************************************ 00:29:37.125 START TEST nvmf_target_disconnect_tc1 00:29:37.125 ************************************ 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:29:37.125 
14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:37.125 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.385 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.385 [2024-06-10 14:38:14.790566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.385 [2024-06-10 14:38:14.790655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdbb1d0 with addr=10.0.0.2, port=4420 00:29:37.385 [2024-06-10 14:38:14.790697] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:37.385 [2024-06-10 14:38:14.790715] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:37.385 [2024-06-10 14:38:14.790723] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:37.385 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:37.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:37.385 Initializing NVMe Controllers 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:37.385 00:29:37.385 real 0m0.130s 00:29:37.385 user 0m0.049s 00:29:37.385 sys 0m0.080s 
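The tc1 case shown above is a purely negative check: the reconnect example is pointed at 10.0.0.2:4420 before any target has been started, so connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() fails, and the NOT wrapper asserts a non-zero exit status. A minimal manual sketch of the same probe, assuming the build tree and address layout used by this job, would be roughly:
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    echo "reconnect exit status: $?"   # expected to be non-zero while nothing listens on 4420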
00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:37.385 ************************************ 00:29:37.385 END TEST nvmf_target_disconnect_tc1 00:29:37.385 ************************************ 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:37.385 14:38:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:37.385 ************************************ 00:29:37.386 START TEST nvmf_target_disconnect_tc2 00:29:37.386 ************************************ 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3222522 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3222522 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3222522 ']' 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:37.386 14:38:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.386 [2024-06-10 14:38:14.944331] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
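For tc2 the target application is launched inside the cvl_0_0_ns_spdk namespace so that its TCP listener binds the interface that was moved there during nvmf_tcp_init. A rough hand-run equivalent of the nvmfappstart/waitforlisten sequence logged here, assuming the same workspace path and the default RPC socket (/var/tmp/spdk.sock), is sketched below; the readiness loop is an illustration, not the exact helper used by the test scripts:
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll the RPC socket until the app is ready to accept configuration calls
    until $spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done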
00:29:37.386 [2024-06-10 14:38:14.944386] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.386 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.647 [2024-06-10 14:38:15.029645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.647 [2024-06-10 14:38:15.125785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.647 [2024-06-10 14:38:15.125839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.647 [2024-06-10 14:38:15.125847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.647 [2024-06-10 14:38:15.125855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.647 [2024-06-10 14:38:15.125861] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.647 [2024-06-10 14:38:15.126560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:29:37.647 [2024-06-10 14:38:15.126791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:29:37.647 [2024-06-10 14:38:15.127004] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:29:37.647 [2024-06-10 14:38:15.127011] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:38.218 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:38.218 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:38.218 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:38.218 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:38.218 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 Malloc0 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 [2024-06-10 14:38:15.870563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 [2024-06-10 14:38:15.910936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3222674 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:38.480 14:38:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.480 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.396 14:38:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3222522 00:29:40.396 14:38:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:40.396 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 
00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Write completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 Read completed with error (sct=0, sc=8) 00:29:40.397 starting I/O failed 00:29:40.397 [2024-06-10 14:38:17.945427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.397 [2024-06-10 14:38:17.945919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.945955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.946178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.946186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 
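Before the reconnect run above, the rpc_cmd calls recorded earlier configured the target over its RPC socket: a 64 MB malloc bdev with 512-byte blocks, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and TCP listeners on 10.0.0.2:4420 for both the subsystem and discovery. As standalone rpc.py invocations, assuming the same workspace path, that sequence is roughly:
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420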
00:29:40.397 [2024-06-10 14:38:17.946328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.946342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.946772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.946806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.947124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.947134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.947619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.947652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.947966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.947976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.948281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.948289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.948708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.948742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.949049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.949059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.949593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.949627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.949866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.949876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 
00:29:40.397 [2024-06-10 14:38:17.950167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.950175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.950420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.950428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.950783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.950791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.951100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.951107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.951305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.951313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.951557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.951565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.951855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.951862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.952185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.952192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.952489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.952497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.952844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.952852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 
00:29:40.397 [2024-06-10 14:38:17.953152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.953160] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.953491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.953499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.953838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.953845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.954142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.954150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.397 [2024-06-10 14:38:17.954501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.397 [2024-06-10 14:38:17.954508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.397 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.954801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.954809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.954987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.954996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.955282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.955290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.955577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.955586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.955776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.955785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 
00:29:40.398 [2024-06-10 14:38:17.956117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.956124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.956399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.956409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.956723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.956730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.957047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.957055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.957362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.957370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.957671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.957679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.957966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.957973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.958368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.958375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.958624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.958631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.958849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.958856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 
00:29:40.398 [2024-06-10 14:38:17.959208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.959217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.959529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.959537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.959856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.959863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.960174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.960181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.960526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.960534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.960712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.960720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.960909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.960916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.961260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.961268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.961445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.961453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.961661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.961669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 
00:29:40.398 [2024-06-10 14:38:17.961972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.961980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.962059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.962066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.962237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.962245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.962656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.962664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.963010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.963018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.963332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.963340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.963732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.963739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.963900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.963908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.964224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.964231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 00:29:40.398 [2024-06-10 14:38:17.964551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.398 [2024-06-10 14:38:17.964558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.398 qpair failed and we were unable to recover it. 
00:29:40.398 [2024-06-10 14:38:17.964781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.398 [2024-06-10 14:38:17.964789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:40.398 qpair failed and we were unable to recover it.
00:29:40.398 [2024-06-10 14:38:17.965092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.398 [2024-06-10 14:38:17.965099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:40.398 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every further connection attempt between 14:38:17.965 and 14:38:18.027 ...]
00:29:40.679 [2024-06-10 14:38:18.027667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.679 [2024-06-10 14:38:18.027674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:40.679 qpair failed and we were unable to recover it.
00:29:40.679 [2024-06-10 14:38:18.028030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.679 [2024-06-10 14:38:18.028037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.679 qpair failed and we were unable to recover it. 00:29:40.679 [2024-06-10 14:38:18.028327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.679 [2024-06-10 14:38:18.028335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.679 qpair failed and we were unable to recover it. 00:29:40.679 [2024-06-10 14:38:18.028641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.679 [2024-06-10 14:38:18.028647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.679 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.028941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.028947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.029259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.029266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.029593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.029600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.029935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.029944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.030252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.030258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.030574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.030581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.030800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.030809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 
00:29:40.680 [2024-06-10 14:38:18.031069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.031076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.031402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.031409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.031729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.031736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.032043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.032051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.032362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.032370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.032662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.032669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.032992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.032999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.033321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.033328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.033565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.033572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.033885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.033893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 
00:29:40.680 [2024-06-10 14:38:18.034205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.034213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.034396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.034404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.034584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.034591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.034886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.034893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.035179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.035187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.035390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.035399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.035621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.035627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.035966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.035974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.036298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.036305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.036643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.036651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 
00:29:40.680 [2024-06-10 14:38:18.036944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.036952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.037267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.037276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.037599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.037607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.037910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.037918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.038227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.038234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.038542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.038550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.038839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.038846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.039153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.039163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.039434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.039442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.680 qpair failed and we were unable to recover it. 00:29:40.680 [2024-06-10 14:38:18.039810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.680 [2024-06-10 14:38:18.039818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 
00:29:40.681 [2024-06-10 14:38:18.040119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.040126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.040447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.040454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.040757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.040763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.041161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.041168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.041495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.041502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.041709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.041716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.042001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.042008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.042328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.042337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.042673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.042682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.042988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.042994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 
00:29:40.681 [2024-06-10 14:38:18.043308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.043318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.043641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.043648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.043957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.043965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.044180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.044187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.044499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.044507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.044799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.044806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.045131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.045138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.045465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.045471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.045774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.045782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.046102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.046110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 
00:29:40.681 [2024-06-10 14:38:18.046409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.046416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.046720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.046728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.047039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.047048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.047384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.047391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.047779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.047790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.047960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.047968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.048161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.048168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.048459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.048467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.048802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.681 [2024-06-10 14:38:18.048810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.681 qpair failed and we were unable to recover it. 00:29:40.681 [2024-06-10 14:38:18.049147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.049154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 
00:29:40.682 [2024-06-10 14:38:18.049449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.049457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.049627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.049635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.049950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.049957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.050271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.050277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.050588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.050596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.050921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.050927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.051240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.051247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.051552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.051561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.051933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.051940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.052243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.052251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 
00:29:40.682 [2024-06-10 14:38:18.052553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.052561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.052855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.052862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.053171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.053178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.053365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.053373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.053682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.053689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.053908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.053915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.054222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.054228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.054587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.054595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.054916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.054923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.055231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.055238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 
00:29:40.682 [2024-06-10 14:38:18.055562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.055578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.055791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.055799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.056080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.056088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.056385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.056393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.056711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.056717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.057007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.057014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.057208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.057215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.057500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.057507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.057811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.057817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.058008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.058016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 
00:29:40.682 [2024-06-10 14:38:18.058204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.058213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.058537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.058544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.058908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.058917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.059219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.059227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.059531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.059539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.059835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.059845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.682 [2024-06-10 14:38:18.060230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.682 [2024-06-10 14:38:18.060238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.682 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.060531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.060540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.060832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.060841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.061149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.061157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 
00:29:40.683 [2024-06-10 14:38:18.061387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.061395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.061702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.061710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.062007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.062015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.062327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.062335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.062637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.062645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.062937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.062944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.063232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.063241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.063560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.063569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.063878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.063885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.064205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.064213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 
00:29:40.683 [2024-06-10 14:38:18.064538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.064545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.064839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.064846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.065045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.065052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.065375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.065382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.065702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.065710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.066003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.066010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.066328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.066335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.066719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.066727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.067016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.067023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.067327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.067334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 
00:29:40.683 [2024-06-10 14:38:18.067634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.067641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.067957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.067966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.068275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.068282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.068571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.068577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.068895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.068902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.069216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.069223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.069421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.069427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.069745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.069753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.070075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.070083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.070275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.070282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 
00:29:40.683 [2024-06-10 14:38:18.070632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.070639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.070975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.070983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.071289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.071297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.071579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.071586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.071897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.683 [2024-06-10 14:38:18.071904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.683 qpair failed and we were unable to recover it. 00:29:40.683 [2024-06-10 14:38:18.072196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.072203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.072491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.072499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.072676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.072683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.073009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.073016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.073302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.073311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 
00:29:40.684 [2024-06-10 14:38:18.073628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.073637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.073946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.073954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.074261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.074268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.074459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.074467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.074779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.074786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.075100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.075107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.075298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.075306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.075520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.075530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.075795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.075803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.076137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.076145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 
00:29:40.684 [2024-06-10 14:38:18.076475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.076482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.076640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.076647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.076899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.076906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.077220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.077227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.077547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.077554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.077871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.077879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.078189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.078196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.078507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.078514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.078844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.078851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.079192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.079199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 
00:29:40.684 [2024-06-10 14:38:18.079527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.079534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.079826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.079836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.080107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.080115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.080409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.080416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.080737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.080745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.081057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.081065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.081376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.081383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.081759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.081766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.082091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.082098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.082318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.082325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 
00:29:40.684 [2024-06-10 14:38:18.082646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.082653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.082944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.082951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.083284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.684 [2024-06-10 14:38:18.083293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.684 qpair failed and we were unable to recover it. 00:29:40.684 [2024-06-10 14:38:18.083628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.083635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.083950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.083958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.084248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.084256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.084451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.084458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.084745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.084752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.084967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.084974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.085179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.085186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 
00:29:40.685 [2024-06-10 14:38:18.085404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.085411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.085748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.085755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.086067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.086074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.086369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.086375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.086664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.086672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.086986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.086993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.087301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.087308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.087624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.087633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.087948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.087955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.088284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.088291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 
00:29:40.685 [2024-06-10 14:38:18.088620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.088628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.088931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.088939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.089261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.089269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.089582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.089591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.089892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.089900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.090262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.090269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.090603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.090610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.090922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.090929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.091232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.091238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.091517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.091525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 
00:29:40.685 [2024-06-10 14:38:18.091849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.091856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.092144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.092156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.092461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.092469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.092765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.092772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.093095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.685 [2024-06-10 14:38:18.093102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.685 qpair failed and we were unable to recover it. 00:29:40.685 [2024-06-10 14:38:18.093398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.093406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.093717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.093724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.093917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.093924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.094260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.094267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.094571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.094578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 
00:29:40.686 [2024-06-10 14:38:18.094788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.094795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.095156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.095164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.095459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.095466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.095778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.095786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.096178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.096185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.096467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.096474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.096760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.096767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.097051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.097057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.097377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.097384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.097707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.097715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 
00:29:40.686 [2024-06-10 14:38:18.098032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.098039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.098319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.098326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.098648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.098655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.098956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.098964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.099252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.099258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.099561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.099569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.099905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.099913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.100227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.100235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.100547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.100554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.100725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.100733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 
00:29:40.686 [2024-06-10 14:38:18.101106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.101114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.101415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.101423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.101717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.101724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.102023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.102030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.102252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.102259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.102570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.102577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.102891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.102898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.103217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.103224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.103512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.103520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.103827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.103834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 
00:29:40.686 [2024-06-10 14:38:18.104215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.104223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.104535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.686 [2024-06-10 14:38:18.104542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.686 qpair failed and we were unable to recover it. 00:29:40.686 [2024-06-10 14:38:18.104748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.104755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.104969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.104976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.105262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.105269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.105575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.105582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.105937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.105944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.106260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.106268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.106568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.106576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.106863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.106870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 
00:29:40.687 [2024-06-10 14:38:18.107194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.107202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.107508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.107515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.107805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.107813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.108134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.108141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.108427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.108435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.108719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.108725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.109042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.109049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.109341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.109348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.109672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.109679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.109981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.109989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 
00:29:40.687 [2024-06-10 14:38:18.110308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.110319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.110630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.110637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.110959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.110966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.111296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.111303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.111597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.111605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.111869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.111877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.112196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.112204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.112522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.112532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.112859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.112867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.113189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.113196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 
00:29:40.687 [2024-06-10 14:38:18.113499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.113506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.113842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.113850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.114156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.687 [2024-06-10 14:38:18.114164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.687 qpair failed and we were unable to recover it. 00:29:40.687 [2024-06-10 14:38:18.114486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.114494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.114861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.114869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.115175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.115183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.115464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.115473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.115789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.115795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.116093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.116100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.116407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.116414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 
00:29:40.688 [2024-06-10 14:38:18.116721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.116727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.117047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.117054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.117365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.117372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.117694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.117702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.118016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.118024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.118378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.118385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.118729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.118736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.119033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.119040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.119382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.119389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.119755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.119762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 
00:29:40.688 [2024-06-10 14:38:18.119920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.119927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.120254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.120261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.120578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.120585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.120899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.120905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.121215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.121221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.121539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.121545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.121806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.121813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.122013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.122020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.122242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.122250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 00:29:40.688 [2024-06-10 14:38:18.122626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.688 [2024-06-10 14:38:18.122633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.688 qpair failed and we were unable to recover it. 
00:29:40.694 [2024-06-10 14:38:18.179798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.179805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.180121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.180128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.180416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.180423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.180695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.180703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.181016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.181023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.181334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.181340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.181592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.181599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.181906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.181912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.182220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.182227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.182531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.182538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 
00:29:40.694 [2024-06-10 14:38:18.182853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.182861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.183174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.183182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.183486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.183493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.183782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.183788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.184099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.184105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.694 [2024-06-10 14:38:18.184378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.694 [2024-06-10 14:38:18.184385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.694 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.184713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.184720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.185016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.185025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.185309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.185321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.185643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.185650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 
00:29:40.695 [2024-06-10 14:38:18.186021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.186029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.186322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.186329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.186633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.186641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.186950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.186957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.187234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.187242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.187557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.187564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.187723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.187730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.188101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.188109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.188422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.188429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.188722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.188729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 
00:29:40.695 [2024-06-10 14:38:18.189011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.189019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.189332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.189340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.189702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.189710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.190009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.190016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.190326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.190332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.190522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.190529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.190727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.190734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.191078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.191084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.191393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.191401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.191718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.191725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 
00:29:40.695 [2024-06-10 14:38:18.191912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.191920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.192346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.192353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.192686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.192693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.193007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.193013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.193330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.193339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.193632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.695 [2024-06-10 14:38:18.193639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.695 qpair failed and we were unable to recover it. 00:29:40.695 [2024-06-10 14:38:18.193945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.193952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.194260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.194266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.194575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.194581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.194881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.194887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 
00:29:40.696 [2024-06-10 14:38:18.195279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.195285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.195612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.195619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.195940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.195947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.196241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.196247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.196565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.196571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.196858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.196866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.197175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.197181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.197459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.197465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.197825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.197832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.198141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.198149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 
00:29:40.696 [2024-06-10 14:38:18.198412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.198420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.198809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.198816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.199125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.199132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.199454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.199461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.199667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.199674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.199992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.199998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.200304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.200310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.200671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.200678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.200878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.200885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.201224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.201230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 
00:29:40.696 [2024-06-10 14:38:18.201559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.201565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.201847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.201853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.202157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.202163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.202487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.202495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.202798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.202805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.202994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.203002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.203205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.203211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.203444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.203451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.203775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.203781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.204073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.204080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 
00:29:40.696 [2024-06-10 14:38:18.204386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.204392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.204687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.204694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.204927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.696 [2024-06-10 14:38:18.204933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.696 qpair failed and we were unable to recover it. 00:29:40.696 [2024-06-10 14:38:18.205133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.205140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.205471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.205482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.205759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.205765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.206033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.206040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.206219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.206231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.206555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.206562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.206849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.206856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 
00:29:40.697 [2024-06-10 14:38:18.207217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.207224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.207401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.207408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.207511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.207518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.207824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.207831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.208143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.208150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.208466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.208473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.208698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.208704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.208861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.208868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.209171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.209178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.209399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.209406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 
00:29:40.697 [2024-06-10 14:38:18.209598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.209604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.209907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.209914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.210115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.210121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.210335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.210342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.210420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.210427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.210738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.210745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.210962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.210968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.211257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.211264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.211577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.211584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.211878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.211885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 
00:29:40.697 [2024-06-10 14:38:18.212203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.212209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.212419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.212426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.212798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.212804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.213142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.213148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.213339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.213345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.213562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.213568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.213879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.213885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.214182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.214189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.214435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.214441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.214754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.214760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 
00:29:40.697 [2024-06-10 14:38:18.215090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.215097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.697 [2024-06-10 14:38:18.215280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.697 [2024-06-10 14:38:18.215288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.697 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.215578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.215585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.215895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.215902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.216227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.216235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.216523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.216530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.216627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.216633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.216863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.216870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.217190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.217197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.217512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.217518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 
00:29:40.698 [2024-06-10 14:38:18.217876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.217882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.218202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.218208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.218511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.218517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.218818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.218824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.219163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.219169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.219470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.219477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.219779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.219786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.220079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.220086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.220374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.220381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.220694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.220701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 
00:29:40.698 [2024-06-10 14:38:18.221013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.221020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.221303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.221310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.221540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.221547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.221755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.221763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.222093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.222101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.222415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.222422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.222754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.222761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.223081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.223087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.223287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.223293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.223533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.223540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 
00:29:40.698 [2024-06-10 14:38:18.223850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.223856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.224139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.224146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.224477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.224484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.224841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.224847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.225175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.225181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.225375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.225382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.225710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.225717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.226082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.226089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.226254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.226262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 00:29:40.698 [2024-06-10 14:38:18.226589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.698 [2024-06-10 14:38:18.226596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.698 qpair failed and we were unable to recover it. 
00:29:40.698 [2024-06-10 14:38:18.226884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.226891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.227218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.227225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.227615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.227621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.227829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.227836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.228013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.228022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.228174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.228181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.228375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.228383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.228573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.228580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.228950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.228956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.229051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.229057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 
00:29:40.699 [2024-06-10 14:38:18.229401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.229408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.229714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.229722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.230011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.230017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.230324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.230331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.230666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.230672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.230969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.230975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.231360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.231367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.231662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.231668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.231982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.231988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.232285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.232292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 
00:29:40.699 [2024-06-10 14:38:18.232613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.232620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.232931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.232938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.233135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.233142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.233299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.233306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.233661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.233668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.233993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.233999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.234301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.234307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.234443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.234451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.234673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.234679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.234878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.234884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 
00:29:40.699 [2024-06-10 14:38:18.235215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.235222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.235553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.235560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.235874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.235880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.236184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.236191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.236499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.236506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.236868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.236875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.237197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.237204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.237519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.699 [2024-06-10 14:38:18.237527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.699 qpair failed and we were unable to recover it. 00:29:40.699 [2024-06-10 14:38:18.237710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.237718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.237911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.237918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 
00:29:40.700 [2024-06-10 14:38:18.238095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.238102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.238394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.238401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.238771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.238778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.239085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.239092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.239393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.239403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.239720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.239728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.240038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.240045] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.240343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.240352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.240668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.240675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.240896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.240902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 
00:29:40.700 [2024-06-10 14:38:18.241206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.241212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.241585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.241591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.241978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.241985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.242280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.242287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.242614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.242621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.242910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.242917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.243219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.243225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.243547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.243554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.243878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.243885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.244179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.244187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 
00:29:40.700 [2024-06-10 14:38:18.244492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.244499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.244767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.244774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.244952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.244959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.245258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.245265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.245590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.245597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.245912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.245919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.246114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.246121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.246488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.246495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.246794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.246809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 00:29:40.700 [2024-06-10 14:38:18.247137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.247143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.700 qpair failed and we were unable to recover it. 
00:29:40.700 [2024-06-10 14:38:18.247456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.700 [2024-06-10 14:38:18.247463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.247628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.247635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.247855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.247861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.248077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.248084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.248282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.248289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.248612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.248619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.248990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.248996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.249282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.249288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.249670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.249677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.249971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.249979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 
00:29:40.701 [2024-06-10 14:38:18.250152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.250159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.250379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.250386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.250676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.250683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.251012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.251018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.251308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.251322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.251544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.251551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.251868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.251875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.252083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.252090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.252280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.252286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.252609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.252616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 
00:29:40.701 [2024-06-10 14:38:18.252904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.252911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.253235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.253242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.253574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.253582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.253889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.253896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.254211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.254218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.254528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.254535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.254844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.254851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.255162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.255169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.255325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.255332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 00:29:40.701 [2024-06-10 14:38:18.255691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.701 [2024-06-10 14:38:18.255698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.701 qpair failed and we were unable to recover it. 
00:29:40.976 [2024-06-10 14:38:18.255899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.255908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.256353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.256360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.256583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.256590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.256904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.256911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.257104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.257111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.257445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.257452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.257777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.257784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.258114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.258121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.258310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.258322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.258518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.258525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 
00:29:40.976 [2024-06-10 14:38:18.258751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.258759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.259075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.259082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.259347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.976 [2024-06-10 14:38:18.259354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.976 qpair failed and we were unable to recover it. 00:29:40.976 [2024-06-10 14:38:18.259659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.259666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.259981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.259987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.260057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.260064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.260408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.260415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.260760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.260767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.261073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.261081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.261348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.261355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 
00:29:40.977 [2024-06-10 14:38:18.261737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.261744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.261948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.261955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.262172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.262178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.262486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.262493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.262804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.262815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.263123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.263129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.263348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.263355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.263671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.263677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.263975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.263982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.264275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.264290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 
00:29:40.977 [2024-06-10 14:38:18.264619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.264626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.264827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.264833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.265175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.265181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.265531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.265537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.265859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.265866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.266239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.266245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.266451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.266458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.266659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.266666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.266865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.266872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.267190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.267196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 
00:29:40.977 [2024-06-10 14:38:18.267496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.267502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.267822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.267828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.268042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.268048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 [2024-06-10 14:38:18.268218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.977 [2024-06-10 14:38:18.268224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.977 qpair failed and we were unable to recover it. 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Read completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.977 Write completed with error (sct=0, sc=8) 00:29:40.977 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 
Write completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Write completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Read completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 Write completed with error (sct=0, sc=8) 00:29:40.978 starting I/O failed 00:29:40.978 [2024-06-10 14:38:18.268946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.978 [2024-06-10 14:38:18.269598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.269703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.270196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.270231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.270604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.270694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.270910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.270918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.271109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.271123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.271498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.271505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.271829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.271836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.272168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.272174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 
00:29:40.978 [2024-06-10 14:38:18.272554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.272560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.272812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.272819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.273077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.273083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.273429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.273436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.273663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.273669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.273977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.273983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.274412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.274418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.274740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.274746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.275000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.275006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.275303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.275328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 
00:29:40.978 [2024-06-10 14:38:18.275501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.275508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.275852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.275859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.276081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.276088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.276232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.276238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.276476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.276483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.276838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.276844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.277171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.277178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.277502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.277509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.277867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.277875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.278191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.278198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 
00:29:40.978 [2024-06-10 14:38:18.278517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.278523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.278730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.278737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.279019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.279026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.279338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.279344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.279661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.279667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.279832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.279838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.978 [2024-06-10 14:38:18.280226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.978 [2024-06-10 14:38:18.280232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.978 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.280571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.280578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.280903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.280909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.281211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.281218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 
00:29:40.979 [2024-06-10 14:38:18.281398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.281405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.281642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.281649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.281894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.281902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.282089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.282096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.282400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.282407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.282593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.282600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.282917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.282923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.283097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.283104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.283423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.283430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.283756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.283762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 
00:29:40.979 [2024-06-10 14:38:18.284107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.284113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.284428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.284435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.284750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.284757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.284957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.284963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.285345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.285352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.285661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.285668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.286004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.286010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.286309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.286319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.286641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.286648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.286841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.286848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 
00:29:40.979 [2024-06-10 14:38:18.287152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.287159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.287512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.287518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.287817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.287824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.287934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.287941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.288255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.288261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.288620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.288626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.288975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.288982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.289095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.289102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.289372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.289379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.289668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.289674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 
00:29:40.979 [2024-06-10 14:38:18.289996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.290002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.290329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.290336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.290652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.290659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.979 qpair failed and we were unable to recover it. 00:29:40.979 [2024-06-10 14:38:18.290951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.979 [2024-06-10 14:38:18.290958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.291278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.291285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.291581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.291588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.291756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.291764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.292066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.292073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.292461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.292468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.292746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.292753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 
00:29:40.980 [2024-06-10 14:38:18.293063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.293069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.293360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.293367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.293687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.293695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.293985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.293992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.294193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.294199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.294490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.294497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.294717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.294724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.295111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.295117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.295439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.295446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.295807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.295813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 
00:29:40.980 [2024-06-10 14:38:18.296090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.296096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.296346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.296353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.296671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.296677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.296887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.296894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.297345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.297352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.297595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.297601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.297874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.297881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.298087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.298094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.298438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.298445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.298760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.298767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 
00:29:40.980 [2024-06-10 14:38:18.299080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.299087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.299246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.299253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.299620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.299627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.299932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.299939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.300265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.300271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.300499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.300506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.300777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.300785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.300997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.301004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.301350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.301357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.301741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.301748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 
00:29:40.980 [2024-06-10 14:38:18.302077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.302084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.980 [2024-06-10 14:38:18.302427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.980 [2024-06-10 14:38:18.302433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.980 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.302722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.302729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.303047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.303053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.303408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.303416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.303679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.303686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.303917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.303923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.304157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.304163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.304401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.304408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.304764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.304770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 
00:29:40.981 [2024-06-10 14:38:18.305057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.305064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.305386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.305392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.305738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.305747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.305937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.305944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.306269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.306275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.306625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.306631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.306905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.306912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.307273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.307279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.307456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.307464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.307806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.307812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 
00:29:40.981 [2024-06-10 14:38:18.308026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.308033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.308394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.308401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.308691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.308698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.308898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.308905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.309085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.309091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.309396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.309402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.309634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.309641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.309859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.309865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.310131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.310138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.310452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.310459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 
00:29:40.981 [2024-06-10 14:38:18.310668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.310675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.311025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.981 [2024-06-10 14:38:18.311032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.981 qpair failed and we were unable to recover it. 00:29:40.981 [2024-06-10 14:38:18.311358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.311365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.311678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.311685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.311993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.312001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.312191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.312197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.312363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.312370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.312674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.312680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.312877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.312883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.313212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.313219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 
00:29:40.982 [2024-06-10 14:38:18.313538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.313545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.313852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.313859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.314034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.314040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.314467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.314474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.314641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.314648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.314974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.314980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.315186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.315192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.315519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.315526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.315577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.315584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.315891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.315897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 
00:29:40.982 [2024-06-10 14:38:18.316210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.316217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.316465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.316471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.316781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.316790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.317108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.317114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.317415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.317422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.317723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.317730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.318064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.318071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.318267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.318274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.318514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.318521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.318816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.318824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 
00:29:40.982 [2024-06-10 14:38:18.319103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.319110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.319443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.319449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.319670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.319677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.319847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.319854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.320146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.320153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.320418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.320425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.320828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.320835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.321142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.321150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.321469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.321476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 00:29:40.982 [2024-06-10 14:38:18.321769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.321775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.982 qpair failed and we were unable to recover it. 
00:29:40.982 [2024-06-10 14:38:18.322092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.982 [2024-06-10 14:38:18.322098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.322384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.322391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.322709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.322716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.322904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.322912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.323266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.323272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.323566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.323573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.323895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.323902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.324230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.324236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.324526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.324533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.324848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.324856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 
00:29:40.983 [2024-06-10 14:38:18.325165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.325172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.325345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.325353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.325653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.325661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.325966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.325973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.326283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.326289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.326588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.326596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.326906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.326913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.327064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.327071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.327421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.327429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.327728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.327735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 
00:29:40.983 [2024-06-10 14:38:18.328092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.328098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.328309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.328319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.328619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.328627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.328961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.328968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.329320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.329326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.329613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.329620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.329922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.329929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.330223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.330230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.330543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.330550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.330860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.330867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 
00:29:40.983 [2024-06-10 14:38:18.331175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.331182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.331498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.331505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.331664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.331671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.332025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.332032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.332363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.332370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.332707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.332713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.333025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.333032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.983 [2024-06-10 14:38:18.333327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.983 [2024-06-10 14:38:18.333333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.983 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.333688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.333695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.333879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.333885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 
00:29:40.984 [2024-06-10 14:38:18.334242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.334248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.334640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.334647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.334955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.334962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.335312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.335321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.335641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.335647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.335859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.335866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.336163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.336171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.336599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.336605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.336906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.336913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.337238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.337245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 
00:29:40.984 [2024-06-10 14:38:18.337588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.337596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.337891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.337898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.338204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.338212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.338534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.338541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.338850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.338858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.339188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.339194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.339486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.339493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.339856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.339864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.340145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.340159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.340433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.340440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 
00:29:40.984 [2024-06-10 14:38:18.340758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.340764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.341074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.341080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.341269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.341277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.341593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.341599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.341923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.341930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.342255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.342261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.342479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.342486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.342804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.984 [2024-06-10 14:38:18.342811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.984 qpair failed and we were unable to recover it. 00:29:40.984 [2024-06-10 14:38:18.343015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.343021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.343283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.343289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 
00:29:40.985 [2024-06-10 14:38:18.343622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.343628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.343843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.343850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.344044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.344051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.344266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.344274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.344593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.344601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.344755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.344762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.345099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.345105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.345313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.345324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.345627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.345633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.345702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.345708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 
00:29:40.985 [2024-06-10 14:38:18.345992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.345998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.346266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.346274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.346588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.346595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.346908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.346916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.347247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.347253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.347603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.347609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.347921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.347927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.348019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.348025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.348288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.348295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.348616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.348623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 
00:29:40.985 [2024-06-10 14:38:18.348919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.348926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.349229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.349235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.349513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.349521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.349848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.349855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.350163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.350171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.350504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.350512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.350821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.350828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.351119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.351126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.351455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.351462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.351630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.351637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 
00:29:40.985 [2024-06-10 14:38:18.351910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.351917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.352252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.352258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.352544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.352554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.352876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.352882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.353189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.353196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.985 [2024-06-10 14:38:18.353486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.985 [2024-06-10 14:38:18.353492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.985 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.353813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.353820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.354130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.354137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.354445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.354452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.354741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.354748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 
00:29:40.986 [2024-06-10 14:38:18.355051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.355058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.355374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.355381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.355683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.355689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.355980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.355987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.356290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.356297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.356509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.356517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.356860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.356867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.357162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.357169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.357483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.357490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.357812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.357818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 
00:29:40.986 [2024-06-10 14:38:18.358129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.358135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.358423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.358429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.358751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.358758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.358969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.358976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.359291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.359297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.359590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.359597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.359888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.359895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.360102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.360108] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.360440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.360447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.986 [2024-06-10 14:38:18.360778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.360785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 
00:29:40.986 [2024-06-10 14:38:18.360972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.986 [2024-06-10 14:38:18.360978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.986 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.361303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.361310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.361619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.361626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.361837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.361843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.362190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.362196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.362581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.362587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.362885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.362892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.363206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.363213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.363488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.363495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.363854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.363860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 
00:29:40.987 [2024-06-10 14:38:18.364190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.364196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.364509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.364516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.364829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.364837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.365000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.365008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.365293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.365300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.365717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.365724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.366042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.366048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.366381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.366388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.366702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.366708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.367005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.367012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 
00:29:40.987 [2024-06-10 14:38:18.367351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.367358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.367645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.367652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.367960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.367966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.368263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.368276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.368585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.368591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.368797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.368803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.368967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.368974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.369268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.369276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.369482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.369489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 00:29:40.987 [2024-06-10 14:38:18.369753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.987 [2024-06-10 14:38:18.369760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.987 qpair failed and we were unable to recover it. 
00:29:40.987 [2024-06-10 14:38:18.370106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.987 [2024-06-10 14:38:18.370113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:40.987 qpair failed and we were unable to recover it.
[the same three-line error repeats back-to-back from 14:38:18.370106 through 14:38:18.433457 for tqpair=0x7fd7a4000b90: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered]
00:29:40.994 [2024-06-10 14:38:18.433447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:40.994 [2024-06-10 14:38:18.433457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:40.994 qpair failed and we were unable to recover it.
00:29:40.994 [2024-06-10 14:38:18.433763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.433769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.434080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.434086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.434287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.434293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.434625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.434632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.434936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.434943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.435255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.435261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.435580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.435587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.435895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.435901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.436264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.436270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.436561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.436569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 
00:29:40.994 [2024-06-10 14:38:18.436879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.436886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.437202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.437209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.437512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.437520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.437840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.437847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.438154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.438161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.438468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.438476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.438781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.438787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.439104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.439111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.439426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.439434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.439752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.439758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 
00:29:40.994 [2024-06-10 14:38:18.440118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.440125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.440434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.440442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.440774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.440780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.441084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.441091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.441365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.441372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.441752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.441758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.441957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.441964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.442287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.442294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.442440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.442447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.442827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.442833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 
00:29:40.994 [2024-06-10 14:38:18.443141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.443148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.443457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.443464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.443825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.443833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.444155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.444162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.444458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.444465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.444779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.994 [2024-06-10 14:38:18.444786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.994 qpair failed and we were unable to recover it. 00:29:40.994 [2024-06-10 14:38:18.445083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.445090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.445396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.445403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.445762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.445769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.446084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.446094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 
00:29:40.995 [2024-06-10 14:38:18.446430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.446437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.446727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.446733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.447019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.447025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.447326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.447333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.447666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.447673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.448045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.448052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.448345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.448352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.448532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.448539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.448837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.448844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.449132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.449139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 
00:29:40.995 [2024-06-10 14:38:18.449458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.449465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.449681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.449688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.450030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.450036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.450346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.450353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.450645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.450652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.450961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.450968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.451142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.451149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.451432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.451439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.451770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.451777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.452098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.452105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 
00:29:40.995 [2024-06-10 14:38:18.452458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.452465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.452777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.452784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.453118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.453125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.453426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.453434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.453654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.453661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.454014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.454021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.454326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.454334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.454636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.454643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.454931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.454937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.455230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.455236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 
00:29:40.995 [2024-06-10 14:38:18.455534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.455541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.455862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.455868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.456156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.995 [2024-06-10 14:38:18.456163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.995 qpair failed and we were unable to recover it. 00:29:40.995 [2024-06-10 14:38:18.456478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.456485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.456785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.456792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.457091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.457097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.457384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.457391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.457710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.457717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.458028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.458036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.458344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.458353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 
00:29:40.996 [2024-06-10 14:38:18.458652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.458658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.458975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.458982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.459335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.459342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.459629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.459635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.459980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.459987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.460777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.460794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.461072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.461080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.461284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.461291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.461603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.461610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.461922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.461928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 
00:29:40.996 [2024-06-10 14:38:18.462257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.462263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.462546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.462553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.462861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.462868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.463169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.463175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.463513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.463521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.463838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.463845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.464130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.464138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.464335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.464341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.464621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.464628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.464888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.464895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 
00:29:40.996 [2024-06-10 14:38:18.465056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.465064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.465360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.465368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.465560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.465567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.465898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.465904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.996 [2024-06-10 14:38:18.466095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.996 [2024-06-10 14:38:18.466101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.996 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.466414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.466421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.466772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.466779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.466946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.466954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.467175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.467181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.467459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.467467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 
00:29:40.997 [2024-06-10 14:38:18.467675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.467681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.468021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.468027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.468321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.468329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.468643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.468650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.468939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.468946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.469267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.469275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.469564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.469571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.470465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.470482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.470692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.470700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.471030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.471039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 
00:29:40.997 [2024-06-10 14:38:18.471374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.471382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.471693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.471700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.472007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.472014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.472344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.472352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.472553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.472559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.472880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.472887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.473078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.473085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.473414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.473421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.473606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.473614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.473876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.473883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 
00:29:40.997 [2024-06-10 14:38:18.474230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.474237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.474413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.474420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.474724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.474731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.475044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.475052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.475365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.475372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.475923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.475938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.476233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.476241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.476567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.476574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.476777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.476784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.476961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.476967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 
00:29:40.997 [2024-06-10 14:38:18.477164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.477172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.477486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.477494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.477691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.997 [2024-06-10 14:38:18.477698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.997 qpair failed and we were unable to recover it. 00:29:40.997 [2024-06-10 14:38:18.477889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.477896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.478190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.478196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.478541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.478548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.479245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.479262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.479597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.479605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.479903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.479910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.480220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.480227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 
00:29:40.998 [2024-06-10 14:38:18.480536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.480543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.480731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.480739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.480958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.480965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.481263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.481270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.481631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.481638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.481929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.481936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.482143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.482150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.482473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.482480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.482785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.482793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.482986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.483001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 
00:29:40.998 [2024-06-10 14:38:18.483310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.483324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.483731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.483738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.484058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.484065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.484271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.484277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.484623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.484630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.485070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.485080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.485256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.485264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.485573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.485581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.485903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.485910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.486130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.486137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 
00:29:40.998 [2024-06-10 14:38:18.486306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.486317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.486594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.486601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.486753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.486759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.487021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.487027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.487313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.487323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.487631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.487639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.487976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.487982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.488296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.488303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.488609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.488616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.488915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.488921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 
00:29:40.998 [2024-06-10 14:38:18.489111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.489118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.998 qpair failed and we were unable to recover it. 00:29:40.998 [2024-06-10 14:38:18.489344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.998 [2024-06-10 14:38:18.489351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.489656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.489663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.489947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.489954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.490284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.490291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.490903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.490919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.491204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.491220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.491882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.491895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.492180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.492187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.492409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.492416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 
00:29:40.999 [2024-06-10 14:38:18.492688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.492694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.493063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.493069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.493366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.493373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.493744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.493751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.494052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.494058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.494363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.494370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.494742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.494750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.495041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.495047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.495419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.495426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.495636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.495645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 
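The run above repeats a single failure signature: posix_sock_create reports connect() failing with errno = 111, and nvme_tcp_qpair_connect_sock then marks the TCP qpair (tqpair=0x7fd7a4000b90) to 10.0.0.2:4420 as unrecoverable. On Linux, errno 111 is ECONNREFUSED. As a minimal sketch (plain sockets, not SPDK's sock layer), the standalone program below reproduces the same connect() outcome against the address and port taken from the log, on the assumption that no NVMe-oF TCP listener is accepting on that port at the time of the attempt:

    /* Minimal sketch (not SPDK code): reproduce the connect() result that
     * posix_sock_create is logging above. The address and port are taken from
     * the log; errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting
     * connections on 10.0.0.2:4420 at that moment. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            /* Expected while no listener is up: errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }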
00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Read completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 Write completed with error (sct=0, sc=8) 00:29:40.999 starting I/O failed 00:29:40.999 [2024-06-10 14:38:18.496353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.999 [2024-06-10 14:38:18.496787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.496827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 
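The block above differs from the surrounding connect() retries: outstanding reads and writes complete with (sct=0, sc=8) and are reported as "starting I/O failed", after which spdk_nvme_qpair_process_completions logs "CQ transport error -6 (No such device or address)" on qpair id 4 (-6 is -ENXIO, matching the message text), and the next connect attempt appears on a different tqpair pointer (0x7fd79c000b90). As a hedged reading, if sct and sc are the NVMe completion Status Code Type and Status Code fields, then sct=0 is the Generic Command Status type and sc=0x08 corresponds to "Command Aborted due to SQ Deletion", which would be consistent with in-flight I/O being aborted as the qpair is torn down. The small sketch below only illustrates that assumed decoding; it is not SPDK code and the mapping should be checked against the NVMe base specification:

    /* Hedged sketch: decode the (sct, sc) pair printed above, assuming they are
     * the NVMe Status Code Type and Status Code fields of the completion. */
    #include <stdio.h>

    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0 && sc == 0x00)
            return "Successful Completion";
        if (sct == 0 && sc == 0x08)
            return "Command Aborted due to SQ Deletion (assumed mapping)";
        return "other/unknown (see the NVMe base specification status tables)";
    }

    int main(void)
    {
        /* Values taken from the completions in the log above. */
        printf("sct=0, sc=8 -> %s\n", decode_status(0, 8));
        return 0;
    }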
00:29:40.999 [2024-06-10 14:38:18.497154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.497184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.497370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.497378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.497579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.497585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.497888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.497894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.498109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.498116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.498416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.498423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.498732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.498739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:40.999 [2024-06-10 14:38:18.499054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.999 [2024-06-10 14:38:18.499060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:40.999 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.499375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.499382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.499699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.499706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 
00:29:41.000 [2024-06-10 14:38:18.499994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.500001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.500318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.500326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.500642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.500649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.500862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.500869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.501134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.501141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.501452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.501459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.501679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.501685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.502015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.502021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.502237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.502246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.502529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.502536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 
00:29:41.000 [2024-06-10 14:38:18.502833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.502841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.503018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.503025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.503310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.503320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.503632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.503638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.503956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.503962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.504272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.504279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.504650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.504658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.504844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.504850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.505131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.505137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.505313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.505324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 
00:29:41.000 [2024-06-10 14:38:18.505715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.505721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.506042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.506048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.506349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.506356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.506629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.506637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.506943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.506950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.507242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.507249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.507613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.507621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.507946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.507952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.508277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.508284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 00:29:41.000 [2024-06-10 14:38:18.508623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.508629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.000 qpair failed and we were unable to recover it. 
00:29:41.000 [2024-06-10 14:38:18.508931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.000 [2024-06-10 14:38:18.508938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.509250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.509256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.509618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.509625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.509942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.509949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.510260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.510266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.510494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.510501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.510815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.510822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.511135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.511142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.511463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.511470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.511769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.511776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 
00:29:41.001 [2024-06-10 14:38:18.512108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.512114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.512393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.512400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.512733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.512740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.513043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.513050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.513277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.513284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.513537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.513544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.513843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.513851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.514156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.514163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.514335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.514344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.514760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.514766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 
00:29:41.001 [2024-06-10 14:38:18.515053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.515061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.515370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.515377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.515559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.515566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.515914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.515921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.516182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.516188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.516496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.516503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.516849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.516856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.517162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.517169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.517485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.517525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.517726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.517733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 
00:29:41.001 [2024-06-10 14:38:18.518072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.518079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.518393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.518400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.518711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.518717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.519005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.519012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.519285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.519292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.519602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.519608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.519872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.519878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.520189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.520196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.001 [2024-06-10 14:38:18.520575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.001 [2024-06-10 14:38:18.520582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.001 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.520863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.520870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 
00:29:41.002 [2024-06-10 14:38:18.521178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.521184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.521494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.521500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.521670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.521677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.521966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.521973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.522293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.522301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.522517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.522524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.522812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.522818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.523143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.523149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.523478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.523485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 00:29:41.002 [2024-06-10 14:38:18.523783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.002 [2024-06-10 14:38:18.523789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.002 qpair failed and we were unable to recover it. 
00:29:41.002 [2024-06-10 14:38:18.524167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.002 [2024-06-10 14:38:18.524175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:41.002 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every reconnect attempt from 14:38:18.524 through 14:38:18.586 ...]
00:29:41.279 [2024-06-10 14:38:18.586739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.586746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.587054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.587061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.587455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.587462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.587740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.587747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.588070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.588076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.588361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.588369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.588664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.588672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.588833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.588841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.589138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.589146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.589496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.589503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 
00:29:41.279 [2024-06-10 14:38:18.589797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.589805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.590117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.590123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.590443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.590451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.590767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.591081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.591087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.591414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.591421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.591711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.591718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.592026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.592032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.592344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.592352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.592655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.592662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 
00:29:41.279 [2024-06-10 14:38:18.592975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.592982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.593304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.593311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.593659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.593666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.593971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.593978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.594261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.594268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.594570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.594578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.279 [2024-06-10 14:38:18.594884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.279 [2024-06-10 14:38:18.594890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.279 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.595177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.595183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.595492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.595499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.595750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.595756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.596069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.596076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.596235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.596242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.596607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.596615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.596922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.596928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.597249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.597256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.597564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.597571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.597897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.597904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.598246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.598253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.598561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.598568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.598757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.598766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.599083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.599091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.599397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.599404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.599743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.599749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.599955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.599961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.600177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.600184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.600558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.600565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.600774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.600780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.601136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.601143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.601450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.601457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.601774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.601780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.602062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.602069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.602404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.602410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.602719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.602727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.602902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.602909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.603235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.603242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.603542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.603548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.603847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.603854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.604165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.604171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.604502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.604509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.604666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.604673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.604995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.605002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.605214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.605221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.605521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.605528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.605566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.605573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.605856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.605863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.606184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.606191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.606492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.606499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.606795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.606802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.607102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.607109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.607426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.607434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.607744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.607751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.608058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.608065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.608359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.608366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.608677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.608684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.608994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.609002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.609307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.609317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.609626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.609633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.609940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.609946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.610253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.610259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.610563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.610570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 
00:29:41.280 [2024-06-10 14:38:18.610856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.610864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.611179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.611186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.611488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.611496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.611818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.611825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.612131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.612138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.612433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.612441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.612750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.612756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.613060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.280 [2024-06-10 14:38:18.613067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.280 qpair failed and we were unable to recover it. 00:29:41.280 [2024-06-10 14:38:18.613171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.613177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.613328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.613336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.281 [2024-06-10 14:38:18.613631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.613638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.613967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.613974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.614263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.614269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.614492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.614499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.614703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.614710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.615063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.615070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.615366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.615374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.615656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.615664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.615996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.616003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.616223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.616230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.281 [2024-06-10 14:38:18.616553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.616560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.616748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.616756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.617069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.617075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.617231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.617239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.617515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.617522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.617844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.617852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.618137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.618145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.618452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.618459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.618772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.618779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.619092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.619098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.281 [2024-06-10 14:38:18.619407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.619414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.619587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.619594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.619848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.619855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.620189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.620196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.620487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.620494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.620824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.620830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.621180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.621187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.621492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.621499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.621792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.621800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.622151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.622158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.281 [2024-06-10 14:38:18.622458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.622465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.622772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.622778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.623096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.623103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.623412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.623419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.623724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.623731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.624057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.624063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.624366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.624373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.624709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.624715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.625014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.625020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.625330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.625337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.281 [2024-06-10 14:38:18.625636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.625644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.625968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.625976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.626312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.626323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.626594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.626601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.626918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.626925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.627233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.627241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.627550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.627557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.627866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.627874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.628070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.628077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 00:29:41.281 [2024-06-10 14:38:18.628385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.281 [2024-06-10 14:38:18.628392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.281 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-06-10 14:38:18.686333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.686340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.686737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.686743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.687089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.687096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.687321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.687329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.687649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.687656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.687950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.687957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.688165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.688173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.688454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.688461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.688773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.688779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.689085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.689092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-06-10 14:38:18.689304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.689311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.689599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.689606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.689895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.689902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.690209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.690216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.690451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.690737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.690745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.690941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.690948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.691267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.691273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.691649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.691657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.691930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.691937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-06-10 14:38:18.692172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.692179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.692492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.692499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.692843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.692849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.693164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.693171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.693467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.693474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.693772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.693779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.694075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.694082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.694388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-06-10 14:38:18.694395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-06-10 14:38:18.694709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.694716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.695031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.695037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.695340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.695347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.695688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.695695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.696002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.696010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.696321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.696328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.696367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.696374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.696656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.696663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.696986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.696994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.697335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.697343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.697651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.697658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.697951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.697958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.698176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.698183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.698490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.698498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.698776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.698783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.699076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.699082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.699352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.699359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.699654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.699660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.699975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.699982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.700307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.700313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.700609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.700617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.700794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.700801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.701097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.701104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.701435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.701442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.701806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.701813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.702124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.702131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.702470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.702478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.702819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.702828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.703124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.703132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.703455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.703462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.703771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.703785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.704152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.704159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.704438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.704445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.704753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.704761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.704947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.704954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.705127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.705134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.705406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.705414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.705751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.705758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.706065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.706072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.706271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.706279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.706586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.706594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.706902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.706909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.707186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.707194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.707525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.707532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.708260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.708276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.708567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.708575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.708857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.708864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.709184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.709191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.709488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.709494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.709789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.709796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.710067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.710074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.710365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.710373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.710581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.710588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.710913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.710920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.711230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.711237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.711520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.711527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.711841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.711848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.712157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.712165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.712548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.712555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.712847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.712854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.713047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.713053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-06-10 14:38:18.713257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.713263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-06-10 14:38:18.713563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-06-10 14:38:18.713570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.713902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.713909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.714197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.714204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.714481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.714488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.714778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.714786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.714997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.715006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.715281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.715288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.715582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.715589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.715799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.715806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.715994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.716001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.716273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.716281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.716585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.716592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.716980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.716988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.717276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.717283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.717603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.717611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.717902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.717909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.718224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.718231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.718572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.718580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.718847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.718855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.719159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.719167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.719480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.719488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.719789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.719796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.720111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.720118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.720428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.720435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.720740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.720755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.721059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.721065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.721372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.721379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.721708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.721715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.722033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.722040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.722262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.722269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.722610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.722617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.722815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.722822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.723164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.723171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.723468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.723474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.723630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.723638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.723936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.723942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.724306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.724312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.724638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.724645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.724957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.724963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.725172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.725179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.725356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.725363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.725581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.725587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.725862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.725869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.726199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.726205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.726501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.726508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.726820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.726828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.727125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.727132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.727431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.727438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.727737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.727744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.727950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.727957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.728179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.728185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.728490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.728497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.728826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.728832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.729144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.729151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.729453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.729460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.729759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.729772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.729903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.729910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.730199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.730206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.730514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.730521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.730855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.730861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-06-10 14:38:18.731165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.731172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.731424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.731431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.731722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.731729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.732035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.732042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.732331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.732338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.732560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.732566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.732863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-06-10 14:38:18.732869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-06-10 14:38:18.733203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.733210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.733600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.733607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.733908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.733916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.734072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.734080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.734320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.734327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.734593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.734600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.734950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.734957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.735146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.735153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.735442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.735449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.735787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.735793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.735908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.735915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.736232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.736238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.736540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.736547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.736887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.736893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.737197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.737203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.737598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.737605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.737949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.737956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.738281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.738288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.738618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.738627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.738943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.738950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.739270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.739277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.739614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.739621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.739916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.739923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.740268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.740276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.740661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.740668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.740870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.740877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.741187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.741194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.741591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.741597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.741903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.741910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.742215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.742221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.742578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.742585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.742895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.742902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.743238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.743244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.743536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.743542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.743855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.743861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.744160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.744172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.744493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.744500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.744706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.744712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.745047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.745053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.745242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.745248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.745632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.745638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.745955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.745962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.746177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.746183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.746519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.746526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.746724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.746732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.747050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.747056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.747347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.747354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.747523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.747529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.747828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.747834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.748159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.748165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.748477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.748484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.748796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.748802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.749117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.749124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.749440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.749447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.749749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.749757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.750069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.750075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.750364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.750371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.750656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.750663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.750964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.750970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.751175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.751181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.751391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.751398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.751766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.751774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-06-10 14:38:18.752134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.752142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-06-10 14:38:18.752444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-06-10 14:38:18.752451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.752664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.752670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.752989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.752996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.753327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.753335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.753649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.753656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.753925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.753933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.754125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.754132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.754950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.754965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.755250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.755258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.755561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.755569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-06-10 14:38:18.755789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.755795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.756110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.756117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.756432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.756439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.756768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.756776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.757102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.757109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.757430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.757437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.757729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.757736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.758042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.758049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.758340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.758347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.758628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.758634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-06-10 14:38:18.758945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.758952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.759138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.759145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.759430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.759440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.759738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.759745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.760071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.760077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.760389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.760395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.760695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.760710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.760993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.760999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.761313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.761322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.761633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.761640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-06-10 14:38:18.761827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.761833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.762214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.762221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.762523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.762530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.762854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.762861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.763176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.763183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.763485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.763492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.763797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.763804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.764112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.764119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.764444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.764451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.764755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.764762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-06-10 14:38:18.765073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.765080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.765400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.765407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.765673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.765680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.765983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.765989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.766299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.766305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.766617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.766623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.766779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.766786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.767049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.767055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.767370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.767377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.767758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.767765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-06-10 14:38:18.768074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.768080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.768366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.768373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.768612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.768618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.768777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.768784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.769067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.769073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.769374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.769381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.769686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.769692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.770003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.770009] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-06-10 14:38:18.770309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-06-10 14:38:18.770318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.770615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.770622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.770941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.770947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.771238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.771438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.771447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.771665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.771672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.772029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.772036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.772343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.772350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.772645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.772652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.772944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.772950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.773158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.773165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.773449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.773455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.773762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.773769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.774119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.774126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.774449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.774455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.774773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.774779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.774941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.774949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.775221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.775228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.775425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.775432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.775761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.775768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.776074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.776080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.776371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.776378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.776693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.776699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.776984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.776990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.777148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.777155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.777438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.777445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.777665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.777671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.777981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.777987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.778145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.778152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.778422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.778429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.778723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.778729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.779058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.779065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.779379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.779386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.779702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.779709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.780016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.780023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.780329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.780336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.780675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.780682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.780969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.780976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.781247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.781254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.781567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.781573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.781888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.781895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.782237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.782243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.782625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.782631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.782955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.782962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.783311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.783326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.783635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.783642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.783949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.783955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.784238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.784245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.784564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.784571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.784861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.784869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.785187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.785194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.785493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.785500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.785804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.785812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.786021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.786027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.786204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.786211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.786587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.786594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.786950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.786957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.787287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.787294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.787630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.787636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.787886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.787893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.788202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.788209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.788564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.788571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-06-10 14:38:18.788884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.788890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.789199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.789206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.789525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.789532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-06-10 14:38:18.789870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-06-10 14:38:18.789877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.790185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.790192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.790492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.790498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.790787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.790794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.791083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.791089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.791287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.791294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.791628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.791635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.791960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.791967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.792295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.792301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.792594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.792601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.792909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.792916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.793235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.793242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.793521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.793528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.793833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.793840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.794149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.794155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.794478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.794485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.794795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.794802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.795095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.795102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.795416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.795422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.795739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.795746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.796085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.796091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.796406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.796413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.796740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.796747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.797055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.797062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.797350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.797357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.797670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.797676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.798012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.798019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.798178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.798185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.798385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.798392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.798673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.798680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.799005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.799012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.799223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.799230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.799547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.799554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.799870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.799877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.800188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.800194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.800491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.800498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.800709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.800715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.801034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.801041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.801358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.801365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.801573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.801579] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.801800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.801806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.802001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.802008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.802163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.802170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.802471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.802478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.802798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.802805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.803094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.803100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.803420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.803427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.803717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.803725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.804035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.804041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.804411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.804418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.804577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.804585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.804849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.804855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.805169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.805175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.805503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.805510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.805822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.805828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.806137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.806143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.806456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.806462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.806765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.806771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.807098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.807105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.807436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.807444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.807715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.807721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.808026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.808032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.808363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.808370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.808740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.808747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.808943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.808950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.809274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.809281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.809597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.809604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-06-10 14:38:18.809914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.809920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-06-10 14:38:18.810111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-06-10 14:38:18.810118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.810420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.810427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.810735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.810742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.811029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.811036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.811358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.811365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.811532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.811539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.811910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.811917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.812105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.812111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.812442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.812449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.812739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.812746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.813054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.813061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.813367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.813373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.813682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.813689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.813861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.813869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.814093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.814099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.814239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.814245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.814783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.814870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.815308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.815363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.815765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.815795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.816101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.816110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.816350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.816357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.816529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.816536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.816828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.816835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.817189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.817195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.817497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.817503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.817817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.817825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.818145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.818152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.818444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.818451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.818769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.818777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.819072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.819078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.819379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.819386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.819687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.819695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.820002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.820009] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.820335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.820343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.820706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.820712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.820991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.820997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.821308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.821317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.821600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.821606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.821943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.821949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.822099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.822106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.822448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.822455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.822671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.822677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.823007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.823013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.823331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.823338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.823647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.823653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.823945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.823951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.824306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.824312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.824601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.824608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.824919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.824925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.825141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.825148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.825450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.825458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.825763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.825771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.826073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.826080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.826271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.826278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.826593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.826601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.826910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.826916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.827232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.827240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.827538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.827545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.827832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.827839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-06-10 14:38:18.828157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.828163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.828446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.828454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.828689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-06-10 14:38:18.828696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-06-10 14:38:18.829026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.829033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.829344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.829352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.829657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.829664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.829961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.829969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.830257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.830263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.830565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.830572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.830882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.830889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.831057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.831064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.831339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.831347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.831738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.831752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.832061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.832068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.832361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.832368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.832690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.832697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.833018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.833024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.833412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.833419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.833759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.833766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.834099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.834105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.834440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.834447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.834629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.834636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.834922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.834929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.835222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.835229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.835538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.835545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.835852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.835858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.836152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.836158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.836456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.836463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.836772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.836779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.837093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.837100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.837389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.837395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.837709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.837716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.838022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.838029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.838350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.838358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.838668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.838674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.838963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.838970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.839286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.839292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.839490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.839496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.839843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.839850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.840152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.840159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.840497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.840503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.840855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.840861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.841174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.841180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.841457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.841464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.841668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.841674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.842023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.842029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.842211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.842218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.842564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.842571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.842884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.842890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.843219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.843227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.843556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.843564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.843854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.843861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.844170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.844178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.844496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.844502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.844826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.844832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.845125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.845131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.845427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.845434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.845620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.845626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.845930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.845937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.846281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.846287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-06-10 14:38:18.846580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.846587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.846898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.846904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.847227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.847233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.847452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.847459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.847785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.847792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.848090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.848097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.848391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.848398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.848734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.848740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.849049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.849055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-06-10 14:38:18.849377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-06-10 14:38:18.849384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-06-10 14:38:18.849672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.849679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.849980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.849986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.850176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.850183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.850535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.850542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.850687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.850694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.850961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.850968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.851286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.851293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.851604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.851611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.851926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.851932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.852225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.852231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-06-10 14:38:18.852563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.852570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.852885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.852891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.853217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.853224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.853506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.853513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.853831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.853838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.854088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.854095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.854402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.854409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.854613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.854620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.854891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.854897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.855208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.855214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-06-10 14:38:18.855530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.855537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.855833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.855839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.856148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.856156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.856468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.856475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.856777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.856783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.857085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.857092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.857420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.857427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.857741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.857747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.857963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.857970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-06-10 14:38:18.858306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-06-10 14:38:18.858313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.571 [2024-06-10 14:38:18.858634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.858642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.858952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.858959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.859249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.859256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.859450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.859457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.859762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.859768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.860091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.860099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.860398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.860404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.860718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.860725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.861088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.861094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.861383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.861390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 
00:29:41.571 [2024-06-10 14:38:18.861702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.861709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.862015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.862021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.862088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.862095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.862369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.862376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.862694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.862700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.862999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.863006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.863294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.863302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.863582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.863589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.863897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.863904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.864208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.864215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 
00:29:41.571 [2024-06-10 14:38:18.864378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.864386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.864589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.864596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.864885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.864893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.865186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.865193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.865485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.865492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.865784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.865791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.866095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.866102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.866387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.866394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.866716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.866723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.867033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.867039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 
00:29:41.571 [2024-06-10 14:38:18.867405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.867411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.867704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.867710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.867989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.867996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.868272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.868279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.868610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.571 [2024-06-10 14:38:18.868617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.571 qpair failed and we were unable to recover it. 00:29:41.571 [2024-06-10 14:38:18.868925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.868932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.869240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.869246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.869534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.869541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.869850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.869857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.870185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.870191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 
00:29:41.572 [2024-06-10 14:38:18.870487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.870494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.870811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.870817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.871126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.871132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.871428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.871435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.871726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.871732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.872057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.872063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.872363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.872371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.872694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.872701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.873036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.873043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.873422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.873428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 
00:29:41.572 [2024-06-10 14:38:18.873709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.873716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.874020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.874027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.874321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.874328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.874648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.874654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.874962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.874969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.875296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.875302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.875614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.875621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.875935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.875941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.876234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.876241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.876628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.876634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 
00:29:41.572 [2024-06-10 14:38:18.876918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.876932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.877237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.877244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.877534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.877541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.877842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.877849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.878157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.878164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.878423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.878431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.878590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.878597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.878920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.878926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.879276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.879283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.879481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.879488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 
00:29:41.572 [2024-06-10 14:38:18.879837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.879844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.880179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.880186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.880495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.572 [2024-06-10 14:38:18.880502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.572 qpair failed and we were unable to recover it. 00:29:41.572 [2024-06-10 14:38:18.880811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.880818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.881115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.881122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.881431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.881438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.881792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.881799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.882019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.882026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.882339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.882347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.882666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.882672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-06-10 14:38:18.882961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.882967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.883289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.883295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.883508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.883515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.883828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.883834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.884117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.884123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.884445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.884451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.884763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.884769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.885100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.885107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.885327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.885334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.885641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.885647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-06-10 14:38:18.885961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.885968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.886175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.886181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.886459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.886466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.886774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.886780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.887089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.887096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.887389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.887397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.887704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.887710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.888026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.888033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.888331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.888338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.888682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.888690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-06-10 14:38:18.888979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.888985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.889300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.889306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.889503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.889510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.889796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.889803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.890111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.890118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.890426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.890432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.890767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.890774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.891064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.891071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.891372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.891379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.573 [2024-06-10 14:38:18.891690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.891696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 
00:29:41.573 [2024-06-10 14:38:18.892006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.573 [2024-06-10 14:38:18.892012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.573 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.892167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.892174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.892555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.892562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.892838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.892845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.893146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.893152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.893348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.893354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.893764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.893771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.894066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.894073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.894381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.894388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.894628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.894634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-06-10 14:38:18.894923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.894930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.895241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.895247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.895527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.895534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.895862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.895869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.896162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.896169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.896478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.896485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.896684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.896691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.896840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.896847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.897052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.897059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.897226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.897233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-06-10 14:38:18.897542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.897549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.897904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.897912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.898098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.898104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.898277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.898283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.898595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.898601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.898800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.898806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.899117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.899123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.899432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.899439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.899602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.899609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.899910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.899918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 
00:29:41.574 [2024-06-10 14:38:18.900202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.900209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.900535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.900548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.900860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.900866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.901141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.901148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.901359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.901366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.901547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.901553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.901876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.901883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.902212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.902218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.902496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.574 [2024-06-10 14:38:18.902503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.574 qpair failed and we were unable to recover it. 00:29:41.574 [2024-06-10 14:38:18.902795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.902801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 
00:29:41.575 [2024-06-10 14:38:18.903071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.903077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.903408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.903415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.903705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.903713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.904025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.904032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.904330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.904344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.904642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.904648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.904917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.904923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.905257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.905264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.905565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.905572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.905872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.905878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 
00:29:41.575 [2024-06-10 14:38:18.906188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.906194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.906516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.906523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.906838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.906845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.907164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.907170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.907475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.907483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.907652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.907659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.907989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.907995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.908232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.908238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.908570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.908577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.908872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.908879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 
00:29:41.575 [2024-06-10 14:38:18.909209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.909216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.909428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.909435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.909748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.909754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.910086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.910092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.910415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.910422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.910733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.910740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.911029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.911035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.911324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.911331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.911610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.911616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.575 qpair failed and we were unable to recover it. 00:29:41.575 [2024-06-10 14:38:18.911904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.575 [2024-06-10 14:38:18.911912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-06-10 14:38:18.912118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.912124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.912289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.912295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.912593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.912600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.912890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.912897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.913101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.913108] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.913297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.913304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.913644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.913651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.913938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.913945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.914257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.914264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.914576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.914583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-06-10 14:38:18.914887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.914894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.915191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.915198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.915528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.915535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.915824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.915831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.916182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.916188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.916390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.916397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.916700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.916706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.917016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.917022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.917340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.917348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.917691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.917697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-06-10 14:38:18.918005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.918011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.918187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.918193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.918533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.918540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.918860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.918866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.918984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.918991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.919266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.919272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.919598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.919605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.919912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.919919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.920209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.920216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.920544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.920551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-06-10 14:38:18.920731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.920738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.921079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.921085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.921308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.921316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.921617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.921623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.921864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.921870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.922188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.922195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.922475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.922483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-06-10 14:38:18.922769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.576 [2024-06-10 14:38:18.922776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.922937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.922944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.923331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.923339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 
00:29:41.577 [2024-06-10 14:38:18.923636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.923643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.924007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.924014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.924304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.924312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.924620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.924627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.924933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.924939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.925261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.925267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.925580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.925587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.925904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.925910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.926198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.926205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.926503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.926511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 
00:29:41.577 [2024-06-10 14:38:18.926838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.926845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.927002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.927009] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.927189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.927196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.927557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.927564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.927785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.927791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.928053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.928059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.928377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.928384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.928682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.928688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.929013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.929019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.929308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.929318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 
00:29:41.577 [2024-06-10 14:38:18.929609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.929615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.929904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.929910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.930088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.930095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.930397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.930404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.930708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.930714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.931009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.931016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.931337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.931344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.931673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.931681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.932003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.932010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 00:29:41.577 [2024-06-10 14:38:18.932312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.577 [2024-06-10 14:38:18.932321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.577 qpair failed and we were unable to recover it. 
00:29:41.577 [2024-06-10 14:38:18.932616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:41.577 [2024-06-10 14:38:18.932623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 
00:29:41.577 qpair failed and we were unable to recover it. 
[... the same three-line error sequence repeats back-to-back from 14:38:18.932902 through 14:38:18.994492: every connect() attempt fails with errno = 111, and each qpair connection to 10.0.0.2, port=4420 on tqpair=0x7fd7a4000b90 is reported as failed and unrecoverable ...]
00:29:41.583 [2024-06-10 14:38:18.994788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.994795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.994960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.994967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.995157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.995164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.995460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.995467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.995816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.995823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.996112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.996126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.996413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.996420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.996805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.996813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.997090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.997096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.997406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.997413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 
00:29:41.583 [2024-06-10 14:38:18.997724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.997730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.998015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.998029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.998338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.998345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.998646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.998653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.998821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.998827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.999010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.999017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.999288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.999294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.999687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.999694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:18.999984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.583 [2024-06-10 14:38:18.999991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.583 qpair failed and we were unable to recover it. 00:29:41.583 [2024-06-10 14:38:19.000278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.000284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 
00:29:41.584 [2024-06-10 14:38:19.000594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.000601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.000911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.000917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.001110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.001117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.001342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.001348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.001542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.001549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.001856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.001862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.002177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.002184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.002490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.002497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.002823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.002830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.003138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.003144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 
00:29:41.584 [2024-06-10 14:38:19.003447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.003454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.003744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.003751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.004054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.004061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.004274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.004281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.004508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.004515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.004816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.004823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.005004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.005011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.005322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.005329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.005628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.005635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.005840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.005847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 
00:29:41.584 [2024-06-10 14:38:19.006306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.006313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.006609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.006617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.006955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.006961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.007257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.007264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.007589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.007596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.007895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.007901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.008201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.008207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.008578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.008585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.008844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.008851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.009160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.009166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 
00:29:41.584 [2024-06-10 14:38:19.009481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.009488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.009792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.584 [2024-06-10 14:38:19.009799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.584 qpair failed and we were unable to recover it. 00:29:41.584 [2024-06-10 14:38:19.010111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.010118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.010427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.010434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.010726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.010733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.011017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.011023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.011307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.011313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.011619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.011626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.011917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.011923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.012223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.012229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 
00:29:41.585 [2024-06-10 14:38:19.012538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.012545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.012830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.012837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.013039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.013046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.013355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.013362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.013682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.013689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.014002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.014009] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.014205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.014212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.014513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.014520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.014803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.014811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.015156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.015162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 
00:29:41.585 [2024-06-10 14:38:19.015457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.015464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.015816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.015823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.016107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.016113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.016433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.016440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.016726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.016734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.017031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.017037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.017355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.017362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.017723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.017729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.018018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.018026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.018202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.018211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 
00:29:41.585 [2024-06-10 14:38:19.018402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.018409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.018717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.018724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.019055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.019063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.019360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.019367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.019639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.019646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.019968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.019974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.020263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.020270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.020574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.020581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.020768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.020776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 00:29:41.585 [2024-06-10 14:38:19.021107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.585 [2024-06-10 14:38:19.021114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.585 qpair failed and we were unable to recover it. 
00:29:41.585 [2024-06-10 14:38:19.021442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.021448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.021779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.021786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.022093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.022100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.022388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.022395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.022763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.022769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.023078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.023085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.023396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.023403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.023716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.023723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.023913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.023920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.024233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.024240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 
00:29:41.586 [2024-06-10 14:38:19.024550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.024557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.024870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.024876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.025173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.025180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.025402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.025409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.025727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.025734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.026040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.026046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.026353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.026360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.026687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.026693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.027002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.027008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.027180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.027188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 
00:29:41.586 [2024-06-10 14:38:19.027484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.027491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.027790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.027797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.028105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.028111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.028418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.028425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.028750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.028756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.029071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.029077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.029258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.029265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.029573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.029580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.029878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.029884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.030197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.030204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 
00:29:41.586 [2024-06-10 14:38:19.030499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.030506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.030804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.030810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.030990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.030997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.031277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.031284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.031602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.031610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.031925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.031932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.032264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.032271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.586 [2024-06-10 14:38:19.032596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.586 [2024-06-10 14:38:19.032604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.586 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.032909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.032916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.033224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.033230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 
00:29:41.587 [2024-06-10 14:38:19.033537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.033543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.033870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.033877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.034185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.034191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.034490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.034497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.034700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.034707] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.035009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.035016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.035191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.035198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.035389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.035396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.035783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.035790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.036090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.036096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 
00:29:41.587 [2024-06-10 14:38:19.036391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.036398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.036714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.036721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.037023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.037030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.037340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.037347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.037528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.037535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.037827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.037833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.038157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.038165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.038484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.038491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.038724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.038731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.038941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.038948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 
00:29:41.587 [2024-06-10 14:38:19.039137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.039143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.039459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.039466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.039789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.039796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.040110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.040117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.040454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.040460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.040762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.040768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.041082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.041088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.041280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.041287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.041572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.041578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.041911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.041917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 
00:29:41.587 [2024-06-10 14:38:19.042318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.042324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.042637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.042644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.042974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.042980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.043268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.043275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.043584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.043591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.587 qpair failed and we were unable to recover it. 00:29:41.587 [2024-06-10 14:38:19.043905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.587 [2024-06-10 14:38:19.043912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.044203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.044210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.044487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.044494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.044812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.044820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.045129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.045137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 
00:29:41.588 [2024-06-10 14:38:19.045446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.045452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.045755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.045761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.046083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.046089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.046374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.046387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.046604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.046611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.046941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.046947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.047103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.047110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.047382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.047389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.047707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.047713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.048023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.048030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 
00:29:41.588 [2024-06-10 14:38:19.048358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.048365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.048671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.048678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.048969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.048976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.049191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.049197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.049510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.049518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.049821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.049828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.050125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.050133] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.050414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.050421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.050751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.050757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.051052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.051059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 
00:29:41.588 [2024-06-10 14:38:19.051348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.051356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.051648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.051654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.051972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.051979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.052288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.052294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.052594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.052601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.052893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.052900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.053208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.053214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.053548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.053555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.053852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.053859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.054153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.054159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 
00:29:41.588 [2024-06-10 14:38:19.054479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.054486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.054703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.054709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.055012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.055019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.588 [2024-06-10 14:38:19.055389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.588 [2024-06-10 14:38:19.055396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.588 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.055685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.055693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.056007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.056013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.056223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.056229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.056520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.056527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.056836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.056842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.057143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.057150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 
00:29:41.589 [2024-06-10 14:38:19.057458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.057465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.057763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.057771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.058085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.058091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.058392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.058399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.058702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.058708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.059018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.059024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.059193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.059199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.059514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.059520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.059688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.059695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.060020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.060027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 
00:29:41.589 [2024-06-10 14:38:19.060240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.060247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.060563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.060569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.060862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.060868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.061074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.061081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.061385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.061391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.061710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.061717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.061907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.061916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.062190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.062197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.062486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.062493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.062803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.062809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 
00:29:41.589 [2024-06-10 14:38:19.062999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.063006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.063416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.063423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.063710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.063717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.064023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.064029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.064323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.064330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.064633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.589 [2024-06-10 14:38:19.064639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.589 qpair failed and we were unable to recover it. 00:29:41.589 [2024-06-10 14:38:19.064948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.064955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.065278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.065284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.065627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.065634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.065938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.065945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 
00:29:41.590 [2024-06-10 14:38:19.066245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.066251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.066556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.066563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.066800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.066806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.067102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.067115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.067395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.067401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.067574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.067582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.067768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.067775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.068098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.068104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.068430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.068438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.068745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.068751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 
00:29:41.590 [2024-06-10 14:38:19.069062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.069069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.069404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.069411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.069731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.069738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.070066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.070073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.070370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.070376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.070686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.070693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.070979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.070986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.071189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.071196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.071492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.071498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.071667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.071674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 
00:29:41.590 [2024-06-10 14:38:19.071953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.071959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.072161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.072167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.072439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.072445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.072736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.072742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.073058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.073065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.073359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.073365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.073535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.073544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.073824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.073831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.074000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.074007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.074234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.074240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 
00:29:41.590 [2024-06-10 14:38:19.074544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.074551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.074846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.074852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.075151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.075157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.075340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.075347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.590 [2024-06-10 14:38:19.075680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.590 [2024-06-10 14:38:19.075686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.590 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.075996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.076003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.076337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.076344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.076653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.076659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.076957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.076963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.077178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.077185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 
00:29:41.591 [2024-06-10 14:38:19.077508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.077515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.077835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.077841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.078193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.078199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.078428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.078435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.078751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.078757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.078955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.078961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.079278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.079284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.079572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.079579] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.079750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.079757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.080085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.080092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 
00:29:41.591 [2024-06-10 14:38:19.080437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.080444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.080741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.080747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.081037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.081043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.081207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.081215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.081421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.081427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.081783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.081790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.082093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.082099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.082410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.082416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.082725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.082731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.083018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.083025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 
00:29:41.591 [2024-06-10 14:38:19.083233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.083241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.083547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.083554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.083909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.083916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.084103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.084110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.084411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.084418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.084710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.084718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.085023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.085031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.085339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.085346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.085539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.085545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 00:29:41.591 [2024-06-10 14:38:19.085748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.591 [2024-06-10 14:38:19.085755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.591 qpair failed and we were unable to recover it. 
00:29:41.597 [2024-06-10 14:38:19.145552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.145559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.145853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.145860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.146152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.146158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.146467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.146474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.146790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.146797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.147085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.147093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.147382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.147389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.147705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.147712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.148030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.148036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.148320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.148327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 
00:29:41.597 [2024-06-10 14:38:19.148541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.148548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.597 [2024-06-10 14:38:19.148719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.597 [2024-06-10 14:38:19.148725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.597 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.149032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.149040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.149229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.149235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.149503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.149510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.149822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.149829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.150126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.150133] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.150431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.150437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.150729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.150735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.150892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.150899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 
00:29:41.874 [2024-06-10 14:38:19.151162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.151169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.151455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.151462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.151760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.151768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.152081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.152088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.152421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.152428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.152794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.152802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.153085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.153092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.153384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.153391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.153680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.153687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.153996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.154002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 
00:29:41.874 [2024-06-10 14:38:19.154349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.154356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.154618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.154625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.154959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.154966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.155307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.155317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.155526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.155533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.155750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.155756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.156069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.156077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.874 qpair failed and we were unable to recover it. 00:29:41.874 [2024-06-10 14:38:19.156371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.874 [2024-06-10 14:38:19.156377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.156677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.156683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.156989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.156996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 
00:29:41.875 [2024-06-10 14:38:19.157304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.157311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.157630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.157638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.157935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.157942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.158122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.158129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.158422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.158429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.158757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.158764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.159069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.159075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.159386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.159393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.159711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.159717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.160043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.160050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 
00:29:41.875 [2024-06-10 14:38:19.160346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.160353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.160515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.160522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.160905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.160911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.161235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.161242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.161427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.161434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.161607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.161614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.161887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.161893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.162213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.162220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.162512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.162519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.162838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.162844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 
00:29:41.875 [2024-06-10 14:38:19.163152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.163158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.163485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.163492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.163789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.163796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.164104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.164110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.164289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.164295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.164548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.164554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.164859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.164866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.165150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.165157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.165391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.165398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.165783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.165791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 
00:29:41.875 [2024-06-10 14:38:19.165977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.165984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.166303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.166310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.166669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.166676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.166966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.166972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.167066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.875 [2024-06-10 14:38:19.167073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.875 qpair failed and we were unable to recover it. 00:29:41.875 [2024-06-10 14:38:19.167341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.167347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.167663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.167672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.167886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.167893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.168230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.168237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.168536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.168543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 
00:29:41.876 [2024-06-10 14:38:19.168859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.168865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.169174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.169180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.169473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.169480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.169792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.169799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.170080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.170086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.170413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.170419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.170719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.170726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.171024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.171031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.171333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.171340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.171647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.171653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 
00:29:41.876 [2024-06-10 14:38:19.171844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.171850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.172192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.172198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.172354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.172361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.172654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.172661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.172971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.172977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.173181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.173187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.173406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.173413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.173741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.173748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.173907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.173914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.174301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.174307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 
00:29:41.876 [2024-06-10 14:38:19.174495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.174502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.174715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.174722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.175064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.175071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.175361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.175368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.175676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.175682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.175870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.175877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.176187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.176193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.176609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.176615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.176911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.176917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.177244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.177252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 
00:29:41.876 [2024-06-10 14:38:19.177467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.177473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.177786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.177792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.876 [2024-06-10 14:38:19.178111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.876 [2024-06-10 14:38:19.178118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.876 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.178436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.178443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.178747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.178754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.179056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.179063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.179412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.179420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.179683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.179690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.179997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.180004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.180294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.180301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 
00:29:41.877 [2024-06-10 14:38:19.180588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.180595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.180885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.180891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.181194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.181202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.181526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.181533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.181842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.181848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.182110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.182116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.182427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.182433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.182733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.182740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.183045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.183052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.183328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.183335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 
00:29:41.877 [2024-06-10 14:38:19.183637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.183644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.183838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.183844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.184134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.184142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.184455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.184462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.184761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.184768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.185070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.185077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.185385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.185391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.185724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.185731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.186025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.186032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 00:29:41.877 [2024-06-10 14:38:19.186345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.877 [2024-06-10 14:38:19.186352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.877 qpair failed and we were unable to recover it. 
00:29:41.885 [2024-06-10 14:38:19.243188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.243195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.243415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.243421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.243631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.243637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.243953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.243960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.244290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.244296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.244456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.244463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.244721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.244727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.245010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.245017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.245306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.245313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.245632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.245639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 
00:29:41.885 [2024-06-10 14:38:19.245803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.245810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.246186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.246192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.246500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.246507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.246809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.246817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.246993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.246999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.247242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.247249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.247553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.247560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.247854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.247860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.248177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.248184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.885 qpair failed and we were unable to recover it. 00:29:41.885 [2024-06-10 14:38:19.248343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.885 [2024-06-10 14:38:19.248350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 
00:29:41.886 [2024-06-10 14:38:19.248633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.248640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.248940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.248947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.249227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.249234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.249539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.249546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.249869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.249876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.250165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.250172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.250477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.250484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.250757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.250764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.250975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.250982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.251276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.251282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 
00:29:41.886 [2024-06-10 14:38:19.251582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.251589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.251769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.251775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.252068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.252074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.252283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.252290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.252577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.252584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.252861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.252868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.253170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.253177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.253489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.253496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.253788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.253795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.254107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.254115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 
00:29:41.886 [2024-06-10 14:38:19.254433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.254441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.254732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.254740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.255048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.886 [2024-06-10 14:38:19.255054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.886 qpair failed and we were unable to recover it. 00:29:41.886 [2024-06-10 14:38:19.255363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.255370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.255663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.255669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.255966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.255973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.256290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.256297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.256598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.256605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.256929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.256935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.257117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.257123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 
00:29:41.887 [2024-06-10 14:38:19.257469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.257475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.257761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.257767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.257932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.257938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.258239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.258248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.258422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.258430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.258725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.258732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.259021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.259029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.259221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.259227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.259461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.259468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.259825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.259831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 
00:29:41.887 [2024-06-10 14:38:19.260114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.260120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.260422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.260429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.260722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.260728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.261033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.261039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.261328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.261335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.261624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.261630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.261938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.261944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.262277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.262283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.262583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.262591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.262779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.262786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 
00:29:41.887 [2024-06-10 14:38:19.262940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.262947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.263276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.263283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.263616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.887 [2024-06-10 14:38:19.263623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.887 qpair failed and we were unable to recover it. 00:29:41.887 [2024-06-10 14:38:19.263921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.263928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.264253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.264260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.264560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.264568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.264907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.264914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.265248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.265255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.265566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.265573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.265877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.265885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 
00:29:41.888 [2024-06-10 14:38:19.266175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.266182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.266538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.266545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.266833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.266841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.267147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.267154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.267466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.267474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.267756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.267763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.268077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.268084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.268399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.268407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.268698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.268705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.268991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.268999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 
00:29:41.888 [2024-06-10 14:38:19.269188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.269194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.269474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.269481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.269785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.269792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.270077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.270085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.270397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.270404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.270709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.270716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.271017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.271023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.271327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.271334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.271542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.271549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.271761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.271768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 
00:29:41.888 [2024-06-10 14:38:19.272050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.272057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.272344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.272351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.272547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.272554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.272831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.272837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.273166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.273172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.273482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.273488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.273805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.888 [2024-06-10 14:38:19.273812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.888 qpair failed and we were unable to recover it. 00:29:41.888 [2024-06-10 14:38:19.274084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.274090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.274421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.274427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.274724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.274730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 
00:29:41.889 [2024-06-10 14:38:19.275063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.275069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.275342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.275349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.275668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.275675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.275967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.275973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.276299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.276305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.276601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.276609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.276898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.276905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.277196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.277202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.277401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.277409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.277718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.277724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 
00:29:41.889 [2024-06-10 14:38:19.278012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.278019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.278342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.278349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.278665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.278672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.278991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.278998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.279283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.279290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.279618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.279625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.279911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.279918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.280226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.280232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.280523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.280530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.280832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.280839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 
00:29:41.889 [2024-06-10 14:38:19.281152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.281159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.281474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.281480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.281776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.281782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.282079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.282087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.282259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.282265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.282532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.282539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.282847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.282854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.283161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.283169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.283339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.283348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.283686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.283693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 
00:29:41.889 [2024-06-10 14:38:19.283984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.283990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.284180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.284186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.284481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.284488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.284812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.284818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.285141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.285148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.889 qpair failed and we were unable to recover it. 00:29:41.889 [2024-06-10 14:38:19.285495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.889 [2024-06-10 14:38:19.285502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.285711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.285718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.285906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.285913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.286198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.286204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.286495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.286502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 
00:29:41.890 [2024-06-10 14:38:19.286709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.286716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.287110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.287118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.287418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.287425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.287728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.287734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.287937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.287943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.288259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.288265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.288586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.288592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.288682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.288688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.288960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.288967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.289277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.289283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 
00:29:41.890 [2024-06-10 14:38:19.289645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.289651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.289943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.289950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.290259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.290265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.290568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.290575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.290948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.290954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.290992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.290998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.291338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.291345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.291532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.291539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.291858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.291865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.292178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.292184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 
00:29:41.890 [2024-06-10 14:38:19.292514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.292520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.292909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.292915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.293230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.293236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.293554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.293562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.293858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.293865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.294203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.294209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.294493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.294500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.294823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.294829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.295120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.295126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.295446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.295452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 
00:29:41.890 [2024-06-10 14:38:19.295670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.295676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.295976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.890 [2024-06-10 14:38:19.295984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.890 qpair failed and we were unable to recover it. 00:29:41.890 [2024-06-10 14:38:19.296291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.296297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.296605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.296612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.296927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.296933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.297260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.297267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.297603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.297610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.297912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.297918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.298206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.298212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.298531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.298539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 
00:29:41.891 [2024-06-10 14:38:19.298870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.298877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.299166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.299172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.299493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.299499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.299797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.299805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.300084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.300091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.300382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.300389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.300689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.300696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.300998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.301004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.301290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.301296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.301606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.301613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 
00:29:41.891 [2024-06-10 14:38:19.302006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.302013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.302340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.302346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.302668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.302674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.302867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.302873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.303138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.303144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.303345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.303352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.303689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.303695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.303989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.303996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.304306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.304312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.304618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.304625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 
00:29:41.891 [2024-06-10 14:38:19.304940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.304947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.305249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.305256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.891 qpair failed and we were unable to recover it. 00:29:41.891 [2024-06-10 14:38:19.305567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.891 [2024-06-10 14:38:19.305574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.305888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.305896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.306215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.306221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.306507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.306514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.306841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.306847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.307144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.307151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.307470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.307477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.307796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.307803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 
00:29:41.892 [2024-06-10 14:38:19.308111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.308117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.308273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.308280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.308558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.308565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.308866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.308873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.309185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.309193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.309523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.309530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.309742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.309749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.310018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.310025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.310337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.310344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.310664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.310670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 
00:29:41.892 [2024-06-10 14:38:19.310958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.310965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.311253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.311259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.311557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.311564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.311917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.311923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.312212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.312219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.312541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.312547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.312836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.312843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.313150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.313157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.313463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.313470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.313767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.313774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 
00:29:41.892 [2024-06-10 14:38:19.314126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.314134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.314442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.314449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.314751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.314757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.315085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.315091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.315395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.315402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.315695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.315701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.315998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.316005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.316318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.316325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.316634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.316641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 00:29:41.892 [2024-06-10 14:38:19.316958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.316965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.892 qpair failed and we were unable to recover it. 
00:29:41.892 [2024-06-10 14:38:19.317318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.892 [2024-06-10 14:38:19.317325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.317634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.317641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.317954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.317960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.318274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.318282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.318612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.318620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.318946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.318953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.319263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.319270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.319484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.319491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.319814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.319821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.320166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.320173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 
00:29:41.893 [2024-06-10 14:38:19.320484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.320491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.320769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.320776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.321049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.321056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.321355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.321362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.321665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.321671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.322022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.322029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.322319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.322326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.322616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.322623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.322809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.322816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.323019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.323025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 
00:29:41.893 [2024-06-10 14:38:19.323186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.323193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.323505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.323512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.323814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.323821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.324132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.324140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.324458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.324465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.324774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.324780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.325112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.325119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.325452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.325460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.325751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.325757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.326050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.326056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 
00:29:41.893 [2024-06-10 14:38:19.326379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.326386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.326684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.326692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.326983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.326989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.327285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.327291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.327468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.327476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.327812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.327819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.328124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.328131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.328352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.328359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.893 [2024-06-10 14:38:19.328547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.893 [2024-06-10 14:38:19.328554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.893 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.328904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.328911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 
00:29:41.894 [2024-06-10 14:38:19.329199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.329206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.329495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.329502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.329875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.329882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.330188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.330196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.330486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.330493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.330697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.330704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.330981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.330987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.331297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.331304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.331690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.331697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.331989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.331996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 
00:29:41.894 [2024-06-10 14:38:19.332308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.332317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.332640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.332647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.332985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.332992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.333313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.333322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.333627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.333634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.333895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.333902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.334212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.334218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.334409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.334416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.334732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.334739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 00:29:41.894 [2024-06-10 14:38:19.335040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.894 [2024-06-10 14:38:19.335046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.894 qpair failed and we were unable to recover it. 
00:29:41.894 [2024-06-10 14:38:19.335339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.894 [2024-06-10 14:38:19.335346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:41.894 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats continuously from 14:38:19.335 through 14:38:19.397: posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:29:41.900 [2024-06-10 14:38:19.397832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.900 [2024-06-10 14:38:19.397839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:41.900 qpair failed and we were unable to recover it.
00:29:41.900 [2024-06-10 14:38:19.398050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.398058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.398374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.398380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.398668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.398675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.398866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.398873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.399143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.399149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.399466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.399472] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.399785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.399791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.400153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.400160] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.400475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.400482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.400804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.400810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-06-10 14:38:19.401141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.401148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.401476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.401483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.401768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.401775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.401975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.401981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.402263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.402269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.402593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.402600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.402786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.402793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.403107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.403114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.403427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.403434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.403738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.403744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-06-10 14:38:19.403941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.403947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.404276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.404282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.404574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.900 [2024-06-10 14:38:19.404580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.900 qpair failed and we were unable to recover it. 00:29:41.900 [2024-06-10 14:38:19.404904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.404911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.405301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.405307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.405595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.405603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.405800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.405807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.406095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.406101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.406431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.406438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.406738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.406745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-06-10 14:38:19.407074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.407081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.407261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.407269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.407506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.407514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.407900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.407907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.408230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.408237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.408538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.408546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.408733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.408739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.409061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.409067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.409398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.409404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.409730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.409737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-06-10 14:38:19.410050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.410059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.410339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.410346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.410564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.410571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.410906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.410912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.411225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.411232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.411447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.411454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.411767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.411773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.412107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.412113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.412421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.412428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.412621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.412627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-06-10 14:38:19.412833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.412840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.413145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.413151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.413430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.413436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.413740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.413747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.414041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.414048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.414338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.414345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.414533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.414541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.414856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.414863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.415203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.415210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 00:29:41.901 [2024-06-10 14:38:19.415530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.415537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-06-10 14:38:19.415849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.901 [2024-06-10 14:38:19.415856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.416039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.416046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.416216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.416223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.416589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.416596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.416919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.416925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.417226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.417234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.417534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.417541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.417845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.417852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.418185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.418192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.418498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.418505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 
00:29:41.902 [2024-06-10 14:38:19.418794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.418801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.419103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.419110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.419424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.419431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.419750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.419756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.420045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.420051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.420274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.420280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.420572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.420578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.420732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.420739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.420970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.420977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.421305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.421312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 
00:29:41.902 [2024-06-10 14:38:19.421607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.421621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.421886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.421892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.422192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.422198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.422485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.422492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.422785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.422793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.423067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.423074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.423365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.423371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.423692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.423698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.424018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.424025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.424337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.424343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 
00:29:41.902 [2024-06-10 14:38:19.424433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.424440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.424725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.424732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.424923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.424930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.425105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.425111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.425393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.425400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.425685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.425692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.426014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.426021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.426351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.426358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.902 [2024-06-10 14:38:19.426670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.902 [2024-06-10 14:38:19.426676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.902 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.426968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.426974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 
00:29:41.903 [2024-06-10 14:38:19.427278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.427284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.427620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.427628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.427935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.427941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.428239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.428246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.428597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.428604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.428885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.428892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.429204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.429210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.429573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.429580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.429879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.429885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.430195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.430201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 
00:29:41.903 [2024-06-10 14:38:19.430455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.430462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.430762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.430768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.430937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.430944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.431313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.431321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.431669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.431676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.432013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.432020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.432300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.432306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.432586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.432594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.432910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.432917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.433216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.433223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 
00:29:41.903 [2024-06-10 14:38:19.433530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.433539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.433845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.433851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.434110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.434117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.434306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.434313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.434632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.434638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.434946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.434952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.435272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.435278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.435442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.435449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.435828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.435834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 00:29:41.903 [2024-06-10 14:38:19.436159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.903 [2024-06-10 14:38:19.436166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:41.903 qpair failed and we were unable to recover it. 
00:29:41.903 [2024-06-10 14:38:19.436477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.903 [2024-06-10 14:38:19.436484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:41.903 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" pair for tqpair=0x7fd7a4000b90 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 14:38:19.436 and 14:38:19.500, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:42.186 [2024-06-10 14:38:19.500789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.186 [2024-06-10 14:38:19.500797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.186 qpair failed and we were unable to recover it.
00:29:42.186 [2024-06-10 14:38:19.501085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.186 [2024-06-10 14:38:19.501094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.186 qpair failed and we were unable to recover it. 00:29:42.186 [2024-06-10 14:38:19.501389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.186 [2024-06-10 14:38:19.501397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.186 qpair failed and we were unable to recover it. 00:29:42.186 [2024-06-10 14:38:19.501707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.186 [2024-06-10 14:38:19.501713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.186 qpair failed and we were unable to recover it. 00:29:42.186 [2024-06-10 14:38:19.501984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.186 [2024-06-10 14:38:19.501991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.502323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.502330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.502627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.502635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.502953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.502960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.503268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.503277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.503588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.503595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.503797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.503803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 
00:29:42.187 [2024-06-10 14:38:19.503983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.503990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.504259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.504266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.504615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.504622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.504931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.504939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.505273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.505280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.505562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.505569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.505908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.505915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.506199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.506206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.506397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.506405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.506751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.506759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 
00:29:42.187 [2024-06-10 14:38:19.507072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.507078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.507384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.507391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.507696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.507703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.508006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.508013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.508301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.508308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.508524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.508531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.508856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.508863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.509250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.509256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.509604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.509610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 00:29:42.187 [2024-06-10 14:38:19.509977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.509984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.187 qpair failed and we were unable to recover it. 
00:29:42.187 [2024-06-10 14:38:19.510294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.187 [2024-06-10 14:38:19.510301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.510609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.510616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.510795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.510802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.511138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.511146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.511458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.511465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.511770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.511778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.512088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.512095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.512333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.512340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.512663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.512670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.512966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.512974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 
00:29:42.188 [2024-06-10 14:38:19.513267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.513274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.513594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.513601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.513794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.513800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.514148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.514155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.514437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.514444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.514722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.514729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.515048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.515055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.515243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.515252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.515561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.515568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.515856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.515863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 
00:29:42.188 [2024-06-10 14:38:19.516173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.516180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.516489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.516496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.516879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.516885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.517195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.517202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.517515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.517523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.517829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.517836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.518142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.518150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.518455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.518462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.518845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.518852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.519142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.519149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 
00:29:42.188 [2024-06-10 14:38:19.519461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.519468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.519644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.519651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.519966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.519972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.520279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.188 [2024-06-10 14:38:19.520286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.188 qpair failed and we were unable to recover it. 00:29:42.188 [2024-06-10 14:38:19.520485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.520492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.520707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.520714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.520992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.520999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.521335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.521342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.521627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.521634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.521952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.521959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 
00:29:42.189 [2024-06-10 14:38:19.522249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.522255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.522553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.522559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.522886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.522893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.523234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.523241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.523467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.523473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.523686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.523692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.523976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.523984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.524296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.524304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.524601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.524609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.524948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.524956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 
00:29:42.189 [2024-06-10 14:38:19.525231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.525239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.525535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.525542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.525856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.525864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.526204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.526211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.526512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.526519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.526814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.526822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.527153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.527161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.527480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.527490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.527675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.527683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.527997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.528004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 
00:29:42.189 [2024-06-10 14:38:19.528307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.528320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.528433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.528439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.528749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.528756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.529072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.529079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.529416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.529423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.529789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.529795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.530080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.530086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.530385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.189 [2024-06-10 14:38:19.530392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.189 qpair failed and we were unable to recover it. 00:29:42.189 [2024-06-10 14:38:19.530792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.530799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.531121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.531127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 
00:29:42.190 [2024-06-10 14:38:19.531300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.531308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.531641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.531647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.531957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.531963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.532275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.532281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.532593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.532600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.532914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.532921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.533228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.533235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.533545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.533552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.533872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.533878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.534195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.534201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 
00:29:42.190 [2024-06-10 14:38:19.534516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.534522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.534809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.534816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.535156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.535165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.535478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.535485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.535812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.535819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.536120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.536127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.536427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.536433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.536650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.536656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.536842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.536849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.537035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.537042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 
00:29:42.190 [2024-06-10 14:38:19.537237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.537243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.537553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.537560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.537859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.537867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.538186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.538192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.538494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.538501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.538822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.538829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.539113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.539119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.539436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.539446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.539632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.539639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.539793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.539800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 
00:29:42.190 [2024-06-10 14:38:19.539992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.539998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.190 [2024-06-10 14:38:19.540236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.190 [2024-06-10 14:38:19.540242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.190 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.540629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.540635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.540919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.540926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.541214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.541220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.541508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.541515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.541839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.541846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.542163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.542171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.542331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.542339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.542643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.542649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 
00:29:42.191 [2024-06-10 14:38:19.542846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.542853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.543184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.543191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.543481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.543489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.543705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.543711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.543898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.543904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.544212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.544218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.544498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.544504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.544823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.544830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.545142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.545149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.545451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.545458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 
00:29:42.191 [2024-06-10 14:38:19.545744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.545750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.546060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.546067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.546249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.546256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.546531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.546538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.546709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.546716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.547048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.547055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.547331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.547337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.547639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.547647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.547957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.547964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.548267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.548274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 
00:29:42.191 [2024-06-10 14:38:19.548470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.548476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.548739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.548746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.549053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.549060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.549371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.549377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.549709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.549715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.191 qpair failed and we were unable to recover it. 00:29:42.191 [2024-06-10 14:38:19.550006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.191 [2024-06-10 14:38:19.550012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.550325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.550332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.550791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.550799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.551088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.551096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.551303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.551309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 
00:29:42.192 [2024-06-10 14:38:19.551649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.551656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.551967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.551975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.552172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.552180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.552452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.552459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.552779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.552785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.553085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.553091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.553414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.553421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.553713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.553719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.554056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.554062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.554375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.554383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 
00:29:42.192 [2024-06-10 14:38:19.554651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.554659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.554968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.554975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.555284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.555291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.555606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.555613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.555904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.555911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.556223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.556230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.556533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.556540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.556847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.556854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.557138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.557145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.557446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.557453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 
00:29:42.192 [2024-06-10 14:38:19.557762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.557769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.558075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.558082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.558389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.558395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.558704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.192 [2024-06-10 14:38:19.558710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.192 qpair failed and we were unable to recover it. 00:29:42.192 [2024-06-10 14:38:19.559016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.559023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.559333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.559340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.559635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.559641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.559949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.559955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.560261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.560267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.560577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.560584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 
00:29:42.193 [2024-06-10 14:38:19.560892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.560898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.561058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.561065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.561374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.561381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.561682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.561689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.561997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.562003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.562313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.562324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.562634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.562640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.562829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.562837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.563063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.563070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.563282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.563288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 
00:29:42.193 [2024-06-10 14:38:19.563568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.563575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.563882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.563888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.564205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.564211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.564539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.564547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.564837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.564844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.565055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.565062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.565230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.565236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.565568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.565575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.565926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.565932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.566244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.566251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 
00:29:42.193 [2024-06-10 14:38:19.566568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.566574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.566766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.566773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.567073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.567080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.567411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.567418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.567714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.567720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.567928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.567934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.568256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.568263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.193 [2024-06-10 14:38:19.568571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.193 [2024-06-10 14:38:19.568577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.193 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.568891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.568898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.569210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.569217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 
00:29:42.194 [2024-06-10 14:38:19.569526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.569532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.569805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.569811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.570138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.570145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.570301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.570308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.570617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.570625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.570931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.570939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.571249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.571257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.571569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.571576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.571856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.571863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.572162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.572169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 
00:29:42.194 [2024-06-10 14:38:19.572474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.572481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.572764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.572770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.573072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.573079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.573382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.573389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.573699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.573706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.573993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.574000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.574290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.574296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.574615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.574624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.574819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.574826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.575013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.575019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 
00:29:42.194 [2024-06-10 14:38:19.575329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.575336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.575616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.575623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.575941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.575948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.576257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.576263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.576427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.576434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.576717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.576724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.577048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.577055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.577362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.577369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.577675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.577682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.577889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.577896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 
00:29:42.194 [2024-06-10 14:38:19.578157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.578163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.194 qpair failed and we were unable to recover it. 00:29:42.194 [2024-06-10 14:38:19.578481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.194 [2024-06-10 14:38:19.578488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.578662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.578670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.578969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.578976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.579274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.579281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.579474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.579481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.579804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.579810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.580132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.580139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.580472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.580479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.580777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.580784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 
00:29:42.195 [2024-06-10 14:38:19.581083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.581090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.581281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.581288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.581615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.581622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.581955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.581962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.582295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.582301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.582606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.582613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.582917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.582924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.583231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.583238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.583596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.583602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.583929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.583936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 
00:29:42.195 [2024-06-10 14:38:19.584245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.584251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.584524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.584531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.584864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.584870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.585197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.585203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.585497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.585504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.585830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.585836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.586124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.586131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.586441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.586449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.586756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.586763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.587116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.587122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 
00:29:42.195 [2024-06-10 14:38:19.587451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.587458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.587740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.587747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.588033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.588039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.588350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.588357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.588682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.588688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.195 qpair failed and we were unable to recover it. 00:29:42.195 [2024-06-10 14:38:19.589005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.195 [2024-06-10 14:38:19.589012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.589322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.589328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.589693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.589700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.590002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.590008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.590326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.590332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 
00:29:42.196 [2024-06-10 14:38:19.590624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.590630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.590938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.590945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.591274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.591280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.591455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.591462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.591740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.591746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.591973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.591980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.592354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.592361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.592668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.592675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.592856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.592862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.593033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.593039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 
00:29:42.196 [2024-06-10 14:38:19.593312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.593322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.593524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.593530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.593816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.593822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.594106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.594112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.594457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.594464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.594787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.594794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.594971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.594979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.595313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.595323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.595613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.595620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.595762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.595769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 
00:29:42.196 [2024-06-10 14:38:19.596101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.596107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.596393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.596399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.596716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.596722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.597015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.597022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.597313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.597322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.597696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.597702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.598004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.598010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.598306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.598320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.598601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.196 [2024-06-10 14:38:19.598608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.196 qpair failed and we were unable to recover it. 00:29:42.196 [2024-06-10 14:38:19.598926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.598932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 
00:29:42.197 [2024-06-10 14:38:19.599219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.599226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.599424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.599431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.599765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.599772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.600066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.600073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.600381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.600387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.600710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.600716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.601025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.601031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.601345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.601352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.601646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.601652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.601950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.601956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 
00:29:42.197 [2024-06-10 14:38:19.602267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.602273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.602573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.602581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.602771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.602778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.603067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.603074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.603377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.603383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.603664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.603670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.603996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.604003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.604176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.604183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.604532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.604538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.604831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.604838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 
00:29:42.197 [2024-06-10 14:38:19.605149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.605155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.605462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.605468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.605635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.605642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.605794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.605802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.606108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.606114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.606447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.606453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.606765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.606771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.197 [2024-06-10 14:38:19.607063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.197 [2024-06-10 14:38:19.607070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.197 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.607416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.607423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.607717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.607724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 
00:29:42.198 [2024-06-10 14:38:19.608043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.608049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.608334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.608340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.608613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.608619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.608911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.608918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.609117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.609123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.609403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.609410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.609698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.609704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.610013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.610021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.610336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.610343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.610613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.610619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 
00:29:42.198 [2024-06-10 14:38:19.610911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.610918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.611214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.611221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.611434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.611442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.611712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.611719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.612030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.612037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.612360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.612367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.612647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.612654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.612977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.612984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.613293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.613299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.613640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.613646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 
00:29:42.198 [2024-06-10 14:38:19.613932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.613939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.614252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.614258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.614622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.614628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.614947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.614954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.615115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.615123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.615408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.615415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.615588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.615595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.615884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.615891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.616176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.616183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.616513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.616521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 
00:29:42.198 [2024-06-10 14:38:19.616829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.198 [2024-06-10 14:38:19.616836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.198 qpair failed and we were unable to recover it. 00:29:42.198 [2024-06-10 14:38:19.617134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.617141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.617450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.617457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.617774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.617781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.618101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.618107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.618395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.618402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.618724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.618730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.618943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.618950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.619260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.619267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.619579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.619587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 
00:29:42.199 [2024-06-10 14:38:19.619897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.619904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.620206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.620213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.620489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.620496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.620792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.620800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.620989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.620995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.621174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.621182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.621484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.621490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.621696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.621704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.621829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.621836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.622011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.622018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 
00:29:42.199 [2024-06-10 14:38:19.622324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.622331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.622644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.622650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.622962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.622968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.623249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.623255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.623588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.623595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.623966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.623972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.624280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.624287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.624594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.624601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.624886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.624893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.625180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.625186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 
00:29:42.199 [2024-06-10 14:38:19.625476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.625483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.625691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.625704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.626001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.626008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.626296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.626303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.626510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.199 [2024-06-10 14:38:19.626517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.199 qpair failed and we were unable to recover it. 00:29:42.199 [2024-06-10 14:38:19.626827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.626834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.627140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.627146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.627453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.627460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.627742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.627749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.628059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.628066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 
00:29:42.200 [2024-06-10 14:38:19.628374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.628381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.628675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.628683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.628988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.628996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.629304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.629311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.629627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.629634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.629925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.629932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.630246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.630253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.630563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.630570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.630833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.630840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.631057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.631064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 
00:29:42.200 [2024-06-10 14:38:19.631354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.631360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.631667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.631673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.631982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.631989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.632319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.632326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.632638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.632645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.632952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.632959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.633147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.633154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.633439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.633448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.633638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.633644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.633836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.633843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 
00:29:42.200 [2024-06-10 14:38:19.634170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.634176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.634476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.634484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.634669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.634676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.634969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.634976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.635298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.635305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.635674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.635681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.635998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.636005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.636331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.636338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.636662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.200 [2024-06-10 14:38:19.636668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.200 qpair failed and we were unable to recover it. 00:29:42.200 [2024-06-10 14:38:19.636966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.636973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 
00:29:42.201 [2024-06-10 14:38:19.637321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.637328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.637620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.637627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.637932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.637939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.638155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.638161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.638465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.638472] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.638793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.638799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.639086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.639098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.639429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.639435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.639643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.639650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.639976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.639982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 
00:29:42.201 [2024-06-10 14:38:19.640283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.640290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.640523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.640530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.640822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.640828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.641149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.641156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.641468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.641476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.641816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.641823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.642127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.642133] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.642443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.642450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.642779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.642785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.643113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.643120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 
00:29:42.201 [2024-06-10 14:38:19.643426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.643433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.643736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.643743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.644052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.644059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.644385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.644392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.644706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.644712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.645010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.645018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.645326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.645333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.645703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.645709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.646008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.646014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.646184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.646191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 
00:29:42.201 [2024-06-10 14:38:19.646526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.646533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.646725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.646731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.201 [2024-06-10 14:38:19.647010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.201 [2024-06-10 14:38:19.647016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.201 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.647325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.647331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.647531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.647538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.647868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.647875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.648162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.648169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.648493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.648500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.648841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.648848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.649141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.649148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 
00:29:42.202 [2024-06-10 14:38:19.649464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.649470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.649786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.649793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.650108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.650115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.650413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.650420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.650748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.650754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.651045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.651053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.651370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.651376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.651736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.651743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.652049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.652055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.652365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.652372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 
00:29:42.202 [2024-06-10 14:38:19.652667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.652674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.652863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.652870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.653176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.653182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.653485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.653491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.653857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.653865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.654158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.654165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.654472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.654479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.654657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.654663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.655028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.655034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.655420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.655427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 
00:29:42.202 [2024-06-10 14:38:19.655709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.202 [2024-06-10 14:38:19.655716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.202 qpair failed and we were unable to recover it. 00:29:42.202 [2024-06-10 14:38:19.656037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.656044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.656335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.656342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.656572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.656578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.656880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.656887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.657211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.657218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.657540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.657547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.657841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.657847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.658163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.658170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.658478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.658485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 
00:29:42.203 [2024-06-10 14:38:19.658798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.658805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.659094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.659100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.659378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.659385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.659669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.659675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.659886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.659893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.660221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.660227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.660545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.660552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.660900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.660906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.661202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.661209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.661485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.661492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 
00:29:42.203 [2024-06-10 14:38:19.661786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.661799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.662100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.662106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.662326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.662332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.662623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.662629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.662948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.662955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.663266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.663272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.663656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.663662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.663972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.663979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.664294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.664300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.664579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.664586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 
00:29:42.203 [2024-06-10 14:38:19.664868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.664875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.665205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.665211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.665503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.665509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.665797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.665804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.666119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.203 [2024-06-10 14:38:19.666127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.203 qpair failed and we were unable to recover it. 00:29:42.203 [2024-06-10 14:38:19.666458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.666465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.666658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.666665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.666937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.666944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.667126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.667132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.667455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.667462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 
00:29:42.204 [2024-06-10 14:38:19.667673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.667680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.667954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.667960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.668210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.668216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.668423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.668430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.668744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.668751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.669109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.669115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.669410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.669417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.669622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.669629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.669938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.669945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.670227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.670242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 
00:29:42.204 [2024-06-10 14:38:19.670593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.670600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.670888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.670895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.671212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.671218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.671539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.671546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.671858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.671864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.672156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.672162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.672481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.672488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.672762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.672768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.673087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.673094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.673186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.673192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 
00:29:42.204 [2024-06-10 14:38:19.673452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.673458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.673732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.673738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.674095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.674102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.674378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.674385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.674715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.674721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.675021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.675027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.675339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.675346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.675662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.675669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.204 [2024-06-10 14:38:19.675953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.204 [2024-06-10 14:38:19.675960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.204 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.676120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.676128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 
00:29:42.205 [2024-06-10 14:38:19.676554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.676560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.676851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.676859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.677190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.677197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.677485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.677492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.677747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.677755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.678047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.678054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.678389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.678395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.678742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.678749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.679054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.679060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.679371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.679377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 
00:29:42.205 [2024-06-10 14:38:19.679678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.679685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.679982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.679994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.680296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.680303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.680463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.680471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.680745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.680752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.680957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.680964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.681273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.681280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.681625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.681631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.681821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.681828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.682172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.682179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 
00:29:42.205 [2024-06-10 14:38:19.682487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.682494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.682791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.682798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.683128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.683135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.683445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.683453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.683812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.683819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.684110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.684117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.684411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.684418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.684729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.684735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.685042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.685048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.685255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.685262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 
00:29:42.205 [2024-06-10 14:38:19.685597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.685604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.685912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.685918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.205 qpair failed and we were unable to recover it. 00:29:42.205 [2024-06-10 14:38:19.686080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.205 [2024-06-10 14:38:19.686087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.686266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.686272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.686596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.686603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.686786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.686793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.687131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.687137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.687544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.687550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.687859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.687866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.688165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.688172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 
00:29:42.206 [2024-06-10 14:38:19.688485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.688492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.688762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.688770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.689079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.689086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.689394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.689401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.689710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.689718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.689871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.689879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.690149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.690156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.690466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.690473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.690779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.690786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.691094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.691101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 
00:29:42.206 [2024-06-10 14:38:19.691385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.691392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.691695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.691701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.692017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.692024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.692216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.692222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.692565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.692572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.692894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.692900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.693209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.693216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.693487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.693493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.693794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.693800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 00:29:42.206 [2024-06-10 14:38:19.694117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.206 [2024-06-10 14:38:19.694123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.206 qpair failed and we were unable to recover it. 
00:29:42.212 [2024-06-10 14:38:19.755381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.212 [2024-06-10 14:38:19.755388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.212 qpair failed and we were unable to recover it. 00:29:42.212 [2024-06-10 14:38:19.755554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.212 [2024-06-10 14:38:19.755561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.212 qpair failed and we were unable to recover it. 00:29:42.212 [2024-06-10 14:38:19.755837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.213 [2024-06-10 14:38:19.755851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.213 qpair failed and we were unable to recover it. 00:29:42.213 [2024-06-10 14:38:19.756158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.213 [2024-06-10 14:38:19.756165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.213 qpair failed and we were unable to recover it. 00:29:42.213 [2024-06-10 14:38:19.756469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.213 [2024-06-10 14:38:19.756476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.213 qpair failed and we were unable to recover it. 00:29:42.213 [2024-06-10 14:38:19.756778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.213 [2024-06-10 14:38:19.756785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.213 qpair failed and we were unable to recover it. 00:29:42.213 [2024-06-10 14:38:19.757085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.757092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.757387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.757393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.757776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.757782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.758002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.758008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 
00:29:42.214 [2024-06-10 14:38:19.758340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.758347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.758664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.758671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.758987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.758994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.759323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.759330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.759479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.759486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.759950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.759956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.760250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.760258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.214 [2024-06-10 14:38:19.760570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.214 [2024-06-10 14:38:19.760578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.214 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.760874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.760883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.761761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.761781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 
00:29:42.493 [2024-06-10 14:38:19.762068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.762077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.762271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.762278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.762592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.762599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.762909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.762916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.763227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.763236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.493 [2024-06-10 14:38:19.763546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.493 [2024-06-10 14:38:19.763554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.493 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.763839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.763846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.764126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.764134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.764309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.764320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.764647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.764655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 
00:29:42.494 [2024-06-10 14:38:19.764947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.764953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.765113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.765122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.765347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.765355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.765674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.765681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.765983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.765989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.766320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.766327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.766637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.766643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.766847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.766853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.767155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.767161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.767486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.767494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 
00:29:42.494 [2024-06-10 14:38:19.767768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.767775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.768096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.768105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.768395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.768403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.768710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.768717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.769022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.769028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.769320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.769328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.769684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.769691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.770012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.770019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.770330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.770337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.770650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.770657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 
00:29:42.494 [2024-06-10 14:38:19.771029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.771035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.771387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.771395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.771700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.771707] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.771992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.771998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.772286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.772293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.772609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.772616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.772922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.772929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.773234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.773241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.773554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.494 [2024-06-10 14:38:19.773561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.494 qpair failed and we were unable to recover it. 00:29:42.494 [2024-06-10 14:38:19.773870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.773877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 
00:29:42.495 [2024-06-10 14:38:19.774261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.774268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.774559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.774566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.774884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.774890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.775235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.775243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.775592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.775601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.775901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.775908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.776207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.776214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.776514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.776521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.776838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.776845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.776991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.776998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 
00:29:42.495 [2024-06-10 14:38:19.777272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.777279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.777616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.777624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.777894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.777902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.778200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.778209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.778562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.778570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.778892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.778899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.779278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.779285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.779581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.779589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.779889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.779896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.780174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.780182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 
00:29:42.495 [2024-06-10 14:38:19.780395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.780404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.780716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.780723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.781106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.781113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.781387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.781394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.781716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.781723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.782102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.782109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.782421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.782428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.782759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.782766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.783072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.783079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.783388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.783395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 
00:29:42.495 [2024-06-10 14:38:19.783697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.783704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.784013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.784020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.784328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.495 [2024-06-10 14:38:19.784336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.495 qpair failed and we were unable to recover it. 00:29:42.495 [2024-06-10 14:38:19.784652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.784659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.784948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.784954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.785296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.785303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.785609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.785616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.785890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.785898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.786251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.786259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.786620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.786627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 
00:29:42.496 [2024-06-10 14:38:19.786934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.786942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.787136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.787144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.787459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.787466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.787762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.787770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.788080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.788092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.788423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.788431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.788755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.788763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.789058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.789066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.789385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.789393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.789691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.789698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 
00:29:42.496 [2024-06-10 14:38:19.790010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.790017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.790324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.790331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.790556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.790563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.790874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.790881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.791208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.791215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.791568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.791576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.791845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.791851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.792051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.792058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.792358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.792365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.792666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.792673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 
00:29:42.496 [2024-06-10 14:38:19.793024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.793031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.793334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.793341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.793650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.793657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.793946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.793953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.794265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.794272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.794566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.794574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.496 [2024-06-10 14:38:19.794870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.496 [2024-06-10 14:38:19.794877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.496 qpair failed and we were unable to recover it. 00:29:42.497 [2024-06-10 14:38:19.795177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.497 [2024-06-10 14:38:19.795184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.497 qpair failed and we were unable to recover it. 00:29:42.497 [2024-06-10 14:38:19.795519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.497 [2024-06-10 14:38:19.795527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.497 qpair failed and we were unable to recover it. 00:29:42.497 [2024-06-10 14:38:19.795849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.497 [2024-06-10 14:38:19.795856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.497 qpair failed and we were unable to recover it. 
00:29:42.497 [2024-06-10 14:38:19.796156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.497 [2024-06-10 14:38:19.796163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.497 qpair failed and we were unable to recover it.
(... the same three-line sequence, always with tqpair=0x7fd7a4000b90, addr=10.0.0.2, port=4420, and errno = 111, repeats for every retry between [2024-06-10 14:38:19.796483] and [2024-06-10 14:38:19.858336] ...)
00:29:42.503 [2024-06-10 14:38:19.858642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.503 [2024-06-10 14:38:19.858649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.503 qpair failed and we were unable to recover it.
00:29:42.503 [2024-06-10 14:38:19.858889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.858896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.859195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.859202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.859513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.859520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.859849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.859857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.860166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.860173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.860529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.860537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.860737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.860744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.861040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.861046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.861353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.861360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.861674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.861681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 
00:29:42.503 [2024-06-10 14:38:19.861987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.861994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.862399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.862406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.862701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.862708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.863038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.863044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.863341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.503 [2024-06-10 14:38:19.863348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.503 qpair failed and we were unable to recover it. 00:29:42.503 [2024-06-10 14:38:19.863554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.863561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.863631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.863638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.863974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.863981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.864286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.864292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.864597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.864604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 
00:29:42.504 [2024-06-10 14:38:19.864923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.864930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.865128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.865134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.865478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.865486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.865814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.865821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.866264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.866272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.866454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.866462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.866746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.866753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.866971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.866978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.867181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.867189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.867501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.867508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 
00:29:42.504 [2024-06-10 14:38:19.867883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.867889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.868191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.868200] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.868501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.868508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.868855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.868861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.869155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.869162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.869451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.869458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.869668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.869675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.869964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.869972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.870297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.870304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.870592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.870599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 
00:29:42.504 [2024-06-10 14:38:19.870929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.870935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.871140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.871146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.504 [2024-06-10 14:38:19.871512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.504 [2024-06-10 14:38:19.871519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.504 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.871807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.871813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.872031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.872038] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.872359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.872366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.872694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.872701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.873000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.873007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.873350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.873357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.873666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.873673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 
00:29:42.505 [2024-06-10 14:38:19.873964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.873971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.874266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.874273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.874555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.874562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.874764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.874770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.875102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.875109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.875423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.875430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.875754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.875760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.876136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.876143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.876435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.876442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.876760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.876767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 
00:29:42.505 [2024-06-10 14:38:19.877105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.877112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.877390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.877397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.877686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.877694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.877994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.878001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.878322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.878330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.878639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.878646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.878958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.878966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.879274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.879280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.879575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.879583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.879743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.879750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 
00:29:42.505 [2024-06-10 14:38:19.880079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.880086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.880388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.880397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.880689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.880695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.881000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.881006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.881166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.881173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.881497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.881504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.505 qpair failed and we were unable to recover it. 00:29:42.505 [2024-06-10 14:38:19.881667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.505 [2024-06-10 14:38:19.881674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.881952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.881959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.882281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.882287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.882592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.882599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 
00:29:42.506 [2024-06-10 14:38:19.882907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.882913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.883289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.883296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.883592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.883599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.883893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.883900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.884201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.884208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.884506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.884514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.884832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.884839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.885030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.885037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.885350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.885357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.885686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.885692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 
00:29:42.506 [2024-06-10 14:38:19.886003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.886011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.886324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.886331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.886626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.886634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.886935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.886942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.887106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.887113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.887433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.887439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.887819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.887826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.888108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.888114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.888435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.888442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.888749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.888757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 
00:29:42.506 [2024-06-10 14:38:19.889065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.889072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.889373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.889380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.889677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.889684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.889964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.889971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.890262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.890268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.890590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.890598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.890913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.890919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.891220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.891226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.891608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.891615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 00:29:42.506 [2024-06-10 14:38:19.891821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.891827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.506 qpair failed and we were unable to recover it. 
00:29:42.506 [2024-06-10 14:38:19.892153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.506 [2024-06-10 14:38:19.892160] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.892449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.892458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.892764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.892771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.892929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.892937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.893305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.893313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.893640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.893647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.894035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.894042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.894342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.894349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.894664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.894671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.894958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.894965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 
00:29:42.507 [2024-06-10 14:38:19.895268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.895275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.895612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.895620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.895925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.895931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.896237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.896244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.896529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.896536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.896834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.896841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.897151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.897158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.897306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.897320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.897548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.897555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.897881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.897887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 
00:29:42.507 [2024-06-10 14:38:19.898196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.898203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.898538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.898544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.898847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.898854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.899182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.899189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.899483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.899490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.899807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.899813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.900174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.900180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.900360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.900367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.900691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.900698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.900986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.900993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 
00:29:42.507 [2024-06-10 14:38:19.901152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.901160] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.901349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.901356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.901588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.901595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.901903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.507 [2024-06-10 14:38:19.901910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.507 qpair failed and we were unable to recover it. 00:29:42.507 [2024-06-10 14:38:19.902218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.902225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.902541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.902548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.902844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.902850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.903160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.903169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.903484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.903491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.903819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.903826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 
00:29:42.508 [2024-06-10 14:38:19.904111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.904117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.904277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.904285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.904640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.904646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.904807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.904814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.905077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.905084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.905463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.905471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.905753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.905759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.906088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.906095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.906384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.906391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.906711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.906718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 
00:29:42.508 [2024-06-10 14:38:19.906904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.906912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.907142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.907148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.907482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.907490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.907812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.907819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.908140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.908146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.908470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.908476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.908808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.908814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.909104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.909111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.909284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.909290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.909692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.909700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 
00:29:42.508 [2024-06-10 14:38:19.910020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.910026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.910319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.910327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.910642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.910649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.910858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.910865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.911205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.911213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.911494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.911501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.911790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.911797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.508 [2024-06-10 14:38:19.912162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.508 [2024-06-10 14:38:19.912169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.508 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.912468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.912475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.912778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.912784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 
00:29:42.509 [2024-06-10 14:38:19.913093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.913099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.913391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.913398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.913607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.913615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.913880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.913887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.914198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.914205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.914538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.914544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.914837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.914844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.915158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.915164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.915480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.915487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.915791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.915798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 
00:29:42.509 [2024-06-10 14:38:19.916113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.916120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.916429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.916437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.916741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.916748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.917052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.917059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.917371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.917378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.917709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.917716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.918002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.918015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.918294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.918300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.918662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.918669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.918972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.918978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 
00:29:42.509 [2024-06-10 14:38:19.919298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.919304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.919635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.919643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.919829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.509 [2024-06-10 14:38:19.919836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.509 qpair failed and we were unable to recover it. 00:29:42.509 [2024-06-10 14:38:19.920163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.920171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.920349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.920356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.920678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.920684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.920991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.920998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.921341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.921348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.921653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.921667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.922000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.922006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 
00:29:42.510 [2024-06-10 14:38:19.922318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.922327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.922638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.922644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.922927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.922942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.923239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.923246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.923604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.923612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.923800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.923807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.924124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.924131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.924430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.924437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.924758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.924765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.925073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.925080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 
00:29:42.510 [2024-06-10 14:38:19.925371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.925378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.925622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.925628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.925949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.925955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.926131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.926139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.926378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.926385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.926711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.926717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.927027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.927034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.927341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.927348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.927631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.927637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.927944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.927951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 
00:29:42.510 [2024-06-10 14:38:19.928260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.928267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.928587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.928596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.928880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.928888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.929174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.929181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.929488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.929495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.929796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.929803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.930149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.510 [2024-06-10 14:38:19.930155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.510 qpair failed and we were unable to recover it. 00:29:42.510 [2024-06-10 14:38:19.930494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.930501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.930803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.930810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.931092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.931099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 
00:29:42.511 [2024-06-10 14:38:19.931299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.931306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.931611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.931617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.931836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.931843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.932140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.932146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.932425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.932432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.932742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.932750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.933063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.933071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.933382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.933388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.933676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.933688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.934014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.934020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 
00:29:42.511 [2024-06-10 14:38:19.934227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.934234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.934552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.934559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.934748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.934755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.935070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.935077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.935340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.935347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.935658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.935664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.935975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.935981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.936178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.936184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.936400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.936409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 00:29:42.511 [2024-06-10 14:38:19.936723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.511 [2024-06-10 14:38:19.936730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.511 qpair failed and we were unable to recover it. 
00:29:42.511 [2024-06-10 14:38:19.937013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.937020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.937330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.937337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.937638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.937644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.937959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.937965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.938262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.938269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.938483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.938490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.938836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.938842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3222522 Killed "${NVMF_APP[@]}" "$@"
00:29:42.511 [2024-06-10 14:38:19.939051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.939059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.939267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.939274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 [2024-06-10 14:38:19.939587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.511 [2024-06-10 14:38:19.939594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.511 qpair failed and we were unable to recover it.
00:29:42.511 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 [2024-06-10 14:38:19.939869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.939876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 [2024-06-10 14:38:19.939989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.939996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:42.512 [2024-06-10 14:38:19.940270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.940277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:42.512 [2024-06-10 14:38:19.940485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.940492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:42.512 [2024-06-10 14:38:19.940817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.940824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:42.512 [2024-06-10 14:38:19.941213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.941221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 [2024-06-10 14:38:19.941514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.512 [2024-06-10 14:38:19.941520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.512 qpair failed and we were unable to recover it.
00:29:42.512 [2024-06-10 14:38:19.941833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.941840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.942146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.942153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.942460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.942466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.942773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.942780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.943091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.943099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.943395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.943403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.943727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.943734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.944131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.944137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.944450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.944456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.944639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.944646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 
00:29:42.512 [2024-06-10 14:38:19.945032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.945039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.945346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.945353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.945668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.945675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.945978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.945985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.946291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.946297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.946371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.946378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.946719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.946726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.947047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.947055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.947361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.947369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.947692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.947700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 
00:29:42.512 [2024-06-10 14:38:19.947780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.947787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 [2024-06-10 14:38:19.948055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.948062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3223408 00:29:42.512 [2024-06-10 14:38:19.948357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.948365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3223408 00:29:42.512 [2024-06-10 14:38:19.948680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.512 [2024-06-10 14:38:19.948687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.512 qpair failed and we were unable to recover it. 00:29:42.512 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 3223408 ']' 00:29:42.513 [2024-06-10 14:38:19.949007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.949014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.513 [2024-06-10 14:38:19.949343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.949350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:42.513 [2024-06-10 14:38:19.949654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.949662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:42.513 [2024-06-10 14:38:19.949961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.949969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 14:38:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.513 [2024-06-10 14:38:19.950268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.950276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.950570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.950578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.950871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.950879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.951225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.951232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.951537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.951545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.951844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.951852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.952170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.952178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 
00:29:42.513 [2024-06-10 14:38:19.952389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.952397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.952707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.952715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.953011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.953018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.953349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.953357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.953710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.953718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.954006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.954014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.954321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.954329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.954707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.954714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.955021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.955028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.955337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.955345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 
00:29:42.513 [2024-06-10 14:38:19.955692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.955700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.955992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.956000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.956283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.956291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.956603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.956611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.956885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.956894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.957245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.957253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.957558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.957566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.957773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.957781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.958094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.958102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 00:29:42.513 [2024-06-10 14:38:19.958282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.513 [2024-06-10 14:38:19.958292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.513 qpair failed and we were unable to recover it. 
00:29:42.514 [2024-06-10 14:38:19.958579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.958587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.958876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.958884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.959093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.959101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.959395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.959402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.959810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.959818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.960115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.960122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.960321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.960329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.960538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.960545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.960829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.960835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.961034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.961041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 
00:29:42.514 [2024-06-10 14:38:19.961393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.961400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.961729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.961735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.962058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.962065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.962259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.962266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.962607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.962613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.962919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.962926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.963143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.963149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.963436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.963443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.963618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.963625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.963990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.963997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 
00:29:42.514 [2024-06-10 14:38:19.964320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.964327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.964622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.964629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.964942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.964949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.965244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.965251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.965564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.965572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.965787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.965795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.966114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.966121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.966183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.966189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.514 [2024-06-10 14:38:19.966471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.514 [2024-06-10 14:38:19.966479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.514 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.966811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.966820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 
00:29:42.515 [2024-06-10 14:38:19.967137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.967144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.967472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.967479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.967821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.967828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.968117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.968124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.968431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.968437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.968769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.968777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.969068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.969074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.969393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.969400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.969731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.969738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.969796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.969804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 
00:29:42.515 [2024-06-10 14:38:19.969964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.969979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.970172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.970179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.970416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.970423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.970885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.970892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.971089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.971096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.971452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.971459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.971781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.971787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.972121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.972127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.972424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.972431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.972738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.972744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 
00:29:42.515 [2024-06-10 14:38:19.973056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.973062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.973352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.973360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.973668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.973675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.973746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.973752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.974079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.974085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.974286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.974292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.974632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.974639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.974950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.974956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.975252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.975259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.975332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.975340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 
00:29:42.515 [2024-06-10 14:38:19.975528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.975540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.975858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.975865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.515 qpair failed and we were unable to recover it. 00:29:42.515 [2024-06-10 14:38:19.976080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.515 [2024-06-10 14:38:19.976086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.976386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.976393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.976727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.976733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.976954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.976961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.977235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.977244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.977432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.977439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.977771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.977777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.978103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.978111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 
00:29:42.516 [2024-06-10 14:38:19.978409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.978415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.978841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.978848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.979144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.979151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.979471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.979478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.979696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.979702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.979932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.979939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.980197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.980204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.980466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.980473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.980818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.980825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.981007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.981016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 
00:29:42.516 [2024-06-10 14:38:19.981302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.981310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.981615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.981623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.981938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.981945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.982258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.982265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.982577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.982584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.982898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.982904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.983216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.983222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.983502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.983510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.983841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.983848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.984139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.984152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 
00:29:42.516 [2024-06-10 14:38:19.984464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.984471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.984786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.984792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.985137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.985144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.985451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.985458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.985813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.985820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.986169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.986177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.516 [2024-06-10 14:38:19.986580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.516 [2024-06-10 14:38:19.986586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.516 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.986873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.986880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.986950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.986957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.987243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.987249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 
00:29:42.517 [2024-06-10 14:38:19.987553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.987560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.987870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.987878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.988060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.988067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.988356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.988363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.988683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.988689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.988988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.988994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.989199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.989205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.989430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.989436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.989743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.989750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.990050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.990057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 
00:29:42.517 [2024-06-10 14:38:19.990227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.990234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.990611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.990619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.990802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.990809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.991046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.991053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.991250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.991257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.991438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.991446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.991783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.991790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.992092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.992099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.992420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.992428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.992769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.992777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 
00:29:42.517 [2024-06-10 14:38:19.993082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.993088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.993400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.993407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.993731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.993738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.994032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.994039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.994235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.994242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.994530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.994537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.994759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.994766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.995089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.995095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.995374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.995381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 00:29:42.517 [2024-06-10 14:38:19.995711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.995718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.517 qpair failed and we were unable to recover it. 
00:29:42.517 [2024-06-10 14:38:19.996032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.517 [2024-06-10 14:38:19.996039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.996364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.996372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.996663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.996678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.996984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.996991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.997279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.997286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.997596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.997602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.997935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.997941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.998246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.998253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.998477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.998484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.998799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.998806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 
00:29:42.518 [2024-06-10 14:38:19.999014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.999022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.999174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.999181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.999499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.999506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:19.999795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:19.999801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.000112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.000119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.000403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.000410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.000746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.000754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.001028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.001035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.001220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.001227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.001485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.001492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 
00:29:42.518 [2024-06-10 14:38:20.001784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.001790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.001918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.001924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.002712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.002723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.003013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.003021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.003409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.003417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.003618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.003625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.003879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.003886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.003939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.003946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.004139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.004146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.004327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.004337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 
00:29:42.518 [2024-06-10 14:38:20.004666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.004674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.004850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.004857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.004999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.005006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.005112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.518 [2024-06-10 14:38:20.005119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.518 qpair failed and we were unable to recover it. 00:29:42.518 [2024-06-10 14:38:20.005230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.005237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.005418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.005425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.005606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.005618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.005704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.005713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.005979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.005989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.006083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.006093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 
00:29:42.519 [2024-06-10 14:38:20.006339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.006356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.006441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.006450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.006672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.006696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.006943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.006959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.007037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.007044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.007337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.007368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.007580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.007591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.007901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.007929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.008057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.008070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.008285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.008301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 
00:29:42.519 [2024-06-10 14:38:20.008556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.008566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.008649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.008666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 [2024-06-10 14:38:20.008775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.519 [2024-06-10 14:38:20.008785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.519 qpair failed and we were unable to recover it. 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Write completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.519 Read completed with error (sct=0, sc=8) 00:29:42.519 starting I/O failed 00:29:42.520 Write completed with error (sct=0, sc=8) 00:29:42.520 starting I/O failed 
00:29:42.520 Read completed with error (sct=0, sc=8)
00:29:42.520 starting I/O failed
00:29:42.520 Write completed with error (sct=0, sc=8)
00:29:42.520 starting I/O failed
00:29:42.520 Write completed with error (sct=0, sc=8)
00:29:42.520 starting I/O failed
00:29:42.520 [2024-06-10 14:38:20.009078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:42.520 [2024-06-10 14:38:20.009269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2230 is same with the state(5) to be set
00:29:42.520 [2024-06-10 14:38:20.009617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.009705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.010115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.010162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.010447] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization...
00:29:42.520 [2024-06-10 14:38:20.010491] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:42.520 [2024-06-10 14:38:20.010593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.010604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.010971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.010977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.011185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.011191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.011512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.011520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
00:29:42.520 [2024-06-10 14:38:20.011838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.520 [2024-06-10 14:38:20.011846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.520 qpair failed and we were unable to recover it.
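The errno = 111 reported by posix_sock_create is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP default port) at that moment, so every reconnect attempt on the qpair fails and the I/O still outstanding on that qpair completes with errors. The "Starting SPDK ... / DPDK ... initialization" line above, with the nvmf EAL parameters, appears to be the target application being brought back up while the host keeps retrying. A minimal manual probe along the same lines, not part of the test log and assuming the same address and port, would be:

  # bash-only check (uses the /dev/tcp pseudo-device); with no listener the
  # connect fails immediately with "Connection refused" (errno 111), while a
  # silently dropped SYN is caught by the timeout instead.
  timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo "listener present on 10.0.0.2:4420" \
    || echo "connect() refused or unreachable (ECONNREFUSED is errno 111)"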
00:29:42.520 [2024-06-10 14:38:20.012185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.012192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.012592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.012600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.012838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.012845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.013191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.013199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.013525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.013532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.013844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.013852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.014211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.014219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.014560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.014567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.014897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.014905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.015227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.015234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 
00:29:42.520 [2024-06-10 14:38:20.015544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.015552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.015903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.015910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.016091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.016098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.016393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.016401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.016481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.016490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.016803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.016811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.017094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.017101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.017292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.017300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.017498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.017505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.017814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.017821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 
00:29:42.520 [2024-06-10 14:38:20.018044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.018051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.018334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.018342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.018626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.520 [2024-06-10 14:38:20.018634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.520 qpair failed and we were unable to recover it. 00:29:42.520 [2024-06-10 14:38:20.018904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.018911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.019270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.019277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.019566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.019573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.019883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.019890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.020230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.020237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.020540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.020549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.020854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.020861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 
00:29:42.521 [2024-06-10 14:38:20.021204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.021211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.021370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.021378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.021668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.021676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.021988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.021995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.022274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.022282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.022509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.022517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.022827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.022835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.023150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.023158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.023369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.023376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.023726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.023734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 
00:29:42.521 [2024-06-10 14:38:20.024048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.024055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.024290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.024297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.024614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.024623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.024926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.024934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.025269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.025276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.025574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.025581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.025896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.025903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.026214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.026222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.026566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.026574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.026735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.026743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 
00:29:42.521 [2024-06-10 14:38:20.027053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.027061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.027398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.027405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.027736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.027743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.028059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.028066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.028365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.028374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.028675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.028682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.028970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.521 [2024-06-10 14:38:20.028977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.521 qpair failed and we were unable to recover it. 00:29:42.521 [2024-06-10 14:38:20.029197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.029204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.029409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.029416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.029714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.029722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-06-10 14:38:20.029988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.029995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.030301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.030307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.030621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.030629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.030825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.030834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.031055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.031062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.031301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.031308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.031561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.031568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.031774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.031781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.032119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.032126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.032327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.032335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-06-10 14:38:20.032668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.032676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.032992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.032998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.033293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.033301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.033605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.033612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.033798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.033806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.034166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.034173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.034572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.034580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.034890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.034898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.035248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.035255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.035590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.035597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-06-10 14:38:20.035892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.035899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.036099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.036107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.036429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.036436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.036740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.036747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.036909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.036918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.037258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.037266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.037558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.037567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.037876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.037883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.038204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.038212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.038508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.038516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 
00:29:42.522 [2024-06-10 14:38:20.038817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.522 [2024-06-10 14:38:20.038824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.522 qpair failed and we were unable to recover it. 00:29:42.522 [2024-06-10 14:38:20.039137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.039144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.039464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.039471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.039784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.039790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.040072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.040080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.040414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.040422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.040587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.040594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.040942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.040948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.041163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.041170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.041494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.041502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-06-10 14:38:20.041735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.041742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.523 [2024-06-10 14:38:20.041972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.041978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.042374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.042384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.042610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.042620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.042965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.042975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.043201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.043212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.043444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.043455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.043650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.043661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.044046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.044270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 
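The "EAL: No free 2048 kB hugepages reported on node 1" notice a few entries above comes from DPDK's EAL while the target initializes: NUMA node 1 had no free 2 MB hugepages at that point (the process can still start if enough hugepage memory is available on node 0 or via another page size). If it did need to be addressed, 2 MB pages are typically reserved before the target starts; a sketch with illustrative sizes, not values taken from this run:

  # reserve 2 MB hugepages directly on NUMA node 1 (1024 pages = 2 GiB)
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

  # or let SPDK's setup script size and mount hugepages (HUGEMEM is in MB)
  sudo HUGEMEM=2048 ./scripts/setup.sh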
00:29:42.523 [2024-06-10 14:38:20.044394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.044593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.044793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.044920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.044930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.045013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.045023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.045097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.045107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.045311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.045341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.045603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.045619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.045746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.045754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.046170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.046178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 
00:29:42.523 [2024-06-10 14:38:20.046565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.046574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.046707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.046715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.046991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.046998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.047225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.523 [2024-06-10 14:38:20.047233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.523 qpair failed and we were unable to recover it. 00:29:42.523 [2024-06-10 14:38:20.047529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.047536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.047900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.047906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.048239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.048245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.048476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.048483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.048711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.048718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.049050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.049056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.524 [2024-06-10 14:38:20.049380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.049387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.049698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.049705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.050050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.050057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.050143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.050149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.050281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.050289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.050534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.050541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.050708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.050715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.051021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.051029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.051346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.051353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.051759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.051767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.524 [2024-06-10 14:38:20.052100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.052107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.052297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.052304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.052545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.052553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.052888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.052895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.053111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.053118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.053445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.053452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.053759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.053766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.053978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.053984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.054209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.054215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.054321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.054328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 
00:29:42.524 [2024-06-10 14:38:20.054691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.054697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.054945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.054951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.524 [2024-06-10 14:38:20.055273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.524 [2024-06-10 14:38:20.055280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.524 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.055612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.055619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.055935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.055941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.056281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.056287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.056676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.056682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.057052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.057059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.057435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.057442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.057644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.057651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 
00:29:42.525 [2024-06-10 14:38:20.057853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.057861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.058064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.058070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.058382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.058390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.058610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.058617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.058943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.058950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.059279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.059286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.059482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.059488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.059834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.059841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.060020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.060027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.060328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.060336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 
00:29:42.525 [2024-06-10 14:38:20.060625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.060633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.060972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.060979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.061150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.061157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.061524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.061531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.061864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.061873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.062179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.062186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.062551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.062559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.062879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.062886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.063203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.063210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.063576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.063583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 
00:29:42.525 [2024-06-10 14:38:20.063759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.063766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.064065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.064073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.064432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.064439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.064757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.064764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.064980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.064987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.065168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.525 [2024-06-10 14:38:20.065175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.525 qpair failed and we were unable to recover it. 00:29:42.525 [2024-06-10 14:38:20.065386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.065393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.065664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.065671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.066019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.066026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.066321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.066329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 
00:29:42.526 [2024-06-10 14:38:20.066655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.066663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.066874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.066881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.067200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.067207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.067400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.067407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.067593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.067599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.067921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.067928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.068221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.068228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.068552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.068560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.068792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.068799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.068971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.068978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 
00:29:42.526 [2024-06-10 14:38:20.069318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.069326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.069693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.069700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.526 [2024-06-10 14:38:20.069935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-06-10 14:38:20.069941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.526 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.070257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.070266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.070646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.070654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.070859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.070866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.071200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.071207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.071393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.071401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.071671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.071678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.071878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.071885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 
00:29:42.805 [2024-06-10 14:38:20.072191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.072198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.072515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.072522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.072822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.072828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.073162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.073169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.073490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.073499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.073887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.073895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.074223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.074230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.074540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.074548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.074736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.074743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 00:29:42.805 [2024-06-10 14:38:20.075062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.805 [2024-06-10 14:38:20.075068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.805 qpair failed and we were unable to recover it. 
00:29:42.805 [2024-06-10 14:38:20.075277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.075283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.075601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.075607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.075961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.075968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.076280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.076286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.076648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.076656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.076968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.076975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.077273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.077280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.077640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.077647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.077964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.077971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.078266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.078273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 
00:29:42.806 [2024-06-10 14:38:20.078567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.078574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.078882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.078890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.079084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.079092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.079435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.079442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.079764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.079771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.080135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.080142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.080274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.080281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.080579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.080586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.080905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.080912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.081086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.081093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 
00:29:42.806 [2024-06-10 14:38:20.081323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.081331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.081706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.081713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.082111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.082118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.082414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.082421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.082757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.082764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.082935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.082942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.083223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.083230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.083455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.083462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.083666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.083673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.083857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.083864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 
00:29:42.806 [2024-06-10 14:38:20.084191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.084197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.084511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.084518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.084838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.084846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.085148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.806 [2024-06-10 14:38:20.085155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.806 qpair failed and we were unable to recover it. 00:29:42.806 [2024-06-10 14:38:20.085427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.085435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.085666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.085672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.085880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.085887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.086180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.086188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.086363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.086371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.086697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.086704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 
00:29:42.807 [2024-06-10 14:38:20.086880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.086887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.087286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.087293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.087612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.087619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.087951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.087958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.088326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.088333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.088547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.088555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.088723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.088730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.089111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.089117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.089451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.089457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.089780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.089786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 
00:29:42.807 [2024-06-10 14:38:20.090095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.090102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.090483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.090490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.090704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.090710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.090995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.091001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.091281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.091289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.091609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.091616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.091905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.091912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.092101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.092107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.092524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.092531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.092845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.092852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 
00:29:42.807 [2024-06-10 14:38:20.092886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.807 [2024-06-10 14:38:20.093211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.093219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.093561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.093568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.093891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.093899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.094252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.094260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.094353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.094361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.094668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.094675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.807 [2024-06-10 14:38:20.095004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.807 [2024-06-10 14:38:20.095012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.807 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.095330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.095338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.095655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.095662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 
00:29:42.808 [2024-06-10 14:38:20.095996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.096004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.096326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.096334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.096648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.096655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.096978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.096987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.097321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.097328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.097753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.097760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.098027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.098034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.098361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.098369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.098710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.098717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.099109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.099116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 
00:29:42.808 [2024-06-10 14:38:20.099298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.099306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.099531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.099538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.099851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.099858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.100182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.100189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.100530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.100537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.100653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.100659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.100933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.100940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.101162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.101168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.101369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.101379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.101843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.101850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 
00:29:42.808 [2024-06-10 14:38:20.102197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.102204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.102503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.102510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.102831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.102839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.103153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.103161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.103487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.103494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.103676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.103683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.103984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.103990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.104285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.104293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.104686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.104693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.104986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.104994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 
00:29:42.808 [2024-06-10 14:38:20.105272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.808 [2024-06-10 14:38:20.105278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.808 qpair failed and we were unable to recover it. 00:29:42.808 [2024-06-10 14:38:20.105670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.105677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.105733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.105738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.105856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.105863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.106031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.106037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.106199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.106206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.106546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.106553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.106783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.106790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.107136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.107142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.107321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.107328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 
00:29:42.809 [2024-06-10 14:38:20.107559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.107565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.107853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.107860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.108184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.108191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.108407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.108414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.108616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.108622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.109003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.109011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.109194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.109201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.109390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.109398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.109574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.109582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.109873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.109880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 
00:29:42.809 [2024-06-10 14:38:20.110209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.110216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.110502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.110510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.110802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.110809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.111116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.111124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.111324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.111332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.111656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.111663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.112026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.112032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.112360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.112367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.112700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.112709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.113060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.113068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 
00:29:42.809 [2024-06-10 14:38:20.113387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.113395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.113668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.113675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.113836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.113843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.114159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.809 [2024-06-10 14:38:20.114166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.809 qpair failed and we were unable to recover it. 00:29:42.809 [2024-06-10 14:38:20.114491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.114498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.114889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.114896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.115196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.115204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.115393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.115400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.115749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.115757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.115936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.115944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 
00:29:42.810 [2024-06-10 14:38:20.116263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.116270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.116590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.116597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.116954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.116961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.117280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.117287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.117491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.117499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.117688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.117696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.117853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.117860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.118164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.118171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.118505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.118513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.118669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.118676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 
00:29:42.810 [2024-06-10 14:38:20.118869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.118876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.119146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.119154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.119577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.119585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.119890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.119897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.120214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.120221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.120531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.120538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.120585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.120592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.120888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.120895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.121111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.121117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.810 [2024-06-10 14:38:20.121486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.121493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 
00:29:42.810 [2024-06-10 14:38:20.121817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.810 [2024-06-10 14:38:20.121825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.810 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.122139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.122146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.122294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.122301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.122505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.122512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.122812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.122819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.123152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.123159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.123526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.123534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.123846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.123853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.124091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.124100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.124322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.124330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 
00:29:42.811 [2024-06-10 14:38:20.124529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.124537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.124865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.124873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.124958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.124965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.125348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.125356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.125456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.125463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.125773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.125780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.126105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.126113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.126409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.126418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.126517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.126524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.126746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.126757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 
00:29:42.811 [2024-06-10 14:38:20.127180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.127210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.127468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.127505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.127680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.127695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.127790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.127800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.128224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.128234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.128425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.128433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.128619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.128625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.128853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.128860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.129189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.129197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.129455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.129463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 
00:29:42.811 [2024-06-10 14:38:20.129646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.129654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.129844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.129851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.130169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.130177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.130494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.130501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.811 [2024-06-10 14:38:20.130801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.811 [2024-06-10 14:38:20.130808] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.811 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.131088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.131094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.131505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.131512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.131680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.131687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.131927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.131934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.132099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.132106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 
00:29:42.812 [2024-06-10 14:38:20.132478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.132486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.132789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.132795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.132979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.132986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.133176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.133183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.133499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.133506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.133682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.133689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.133893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.133900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.134174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.134180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.134497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.134506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.134825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.134832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 
00:29:42.812 [2024-06-10 14:38:20.135008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.135014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.135373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.135380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.135625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.135631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.135819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.135826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.135975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.135982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.136295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.136302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.136601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.136609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.136954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.136962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.137145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.137153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.137486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.137494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 
00:29:42.812 [2024-06-10 14:38:20.137808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.137815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.138019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.138026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.138364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.138372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.138688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.138697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.139046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.139053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.139359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.139367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.139683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.139691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.139999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.812 [2024-06-10 14:38:20.140006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.812 qpair failed and we were unable to recover it. 00:29:42.812 [2024-06-10 14:38:20.140192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.140199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.140420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.140427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 
00:29:42.813 [2024-06-10 14:38:20.140765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.140773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.141077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.141084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.141427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.141434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.141811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.141817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.142151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.142157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.142466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.142473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.142719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.142726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.143059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.143067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.143387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.143394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.143816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.143823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 
00:29:42.813 [2024-06-10 14:38:20.144123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.144130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.144456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.144462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.144874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.144881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.145255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.145262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.145578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.145585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.145898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.145905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.146240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.146247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.146541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.146548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.146749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.146758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.147084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.147091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 
00:29:42.813 [2024-06-10 14:38:20.147404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.147411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.147716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.147722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.147958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.147965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.148348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.148355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.148677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.148684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.148882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.148889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.149064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.149072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.149253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.149260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.149558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.149565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.149771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.149778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 
00:29:42.813 [2024-06-10 14:38:20.150063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.150071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.150412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.813 [2024-06-10 14:38:20.150419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.813 qpair failed and we were unable to recover it. 00:29:42.813 [2024-06-10 14:38:20.150621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.150629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.150982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.150989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.151277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.151285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.151610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.151617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.152001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.152008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.152321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.152329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.152646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.152652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.152960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.152966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 
00:29:42.814 [2024-06-10 14:38:20.153200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.153207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.153621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.153629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.153836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.153843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.154148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.154155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.154491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.154498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.154824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.154832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.155042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.155050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.155350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.155358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.155694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.155700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.155892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.155898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 
00:29:42.814 [2024-06-10 14:38:20.156215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.156222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.156521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.156529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.156857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.156863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.157182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.157189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.157503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.157511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.157698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.157705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.158021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.158028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.158197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.158204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.158589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.158597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.158789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.158797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 
00:29:42.814 [2024-06-10 14:38:20.159115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.159122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.159442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.159448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.159777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.159784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.160149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.160156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.160483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.160489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.160627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.814 [2024-06-10 14:38:20.160634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.814 qpair failed and we were unable to recover it. 00:29:42.814 [2024-06-10 14:38:20.160914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.160921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.161232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.161239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.161522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.161530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.161832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.161840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 
00:29:42.815 [2024-06-10 14:38:20.162164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:42.815 [2024-06-10 14:38:20.162192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:42.815 [2024-06-10 14:38:20.162205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:42.815 [2024-06-10 14:38:20.162212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:42.815 [2024-06-10 14:38:20.162217] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:42.815 [2024-06-10 14:38:20.162184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.162193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.162401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:29:42.815 [2024-06-10 14:38:20.162405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.162413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.162569] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:29:42.815 [2024-06-10 14:38:20.162659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.162667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.162712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:29:42.815 [2024-06-10 14:38:20.162713] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:29:42.815 [2024-06-10 14:38:20.162885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.162892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.163105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.163111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.163410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.163417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
00:29:42.815 [2024-06-10 14:38:20.163608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.815 [2024-06-10 14:38:20.163615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.815 qpair failed and we were unable to recover it.
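[editor's note] The app_setup_trace notices above describe how a trace snapshot could be pulled from the still-running nvmf target while these connection attempts are failing. A minimal sketch of that capture, assuming the spdk_trace binary from this build is on PATH and the shared-memory trace file is the /dev/shm/nvmf_trace.0 named in the notice (destination path below is arbitrary):

    # capture a snapshot of events at runtime from the nvmf app (shm name "nvmf", instance id 0),
    # exactly as suggested by the notice
    spdk_trace -s nvmf -i 0

    # 'spdk_trace' without parameters also works if this is the only SPDK application currently running
    spdk_trace

    # or keep the raw trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0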
00:29:42.815 [2024-06-10 14:38:20.163814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.163821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.164159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.164166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.164486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.164492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.164732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.164739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.164956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.164962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.165094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.165100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.165384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.165391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.165701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.165708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.166034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.166043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.166232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.166239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 
00:29:42.815 [2024-06-10 14:38:20.166671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.166679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.167024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.167031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.167358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.167365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.815 [2024-06-10 14:38:20.167686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.815 [2024-06-10 14:38:20.167693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.815 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.168014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.168021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.168204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.168211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.168460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.168467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.168792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.168799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.169114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.169123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.169338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.169346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 
00:29:42.816 [2024-06-10 14:38:20.169690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.169697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.169909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.169916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.170245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.170253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.170450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.170458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.170660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.170667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.170968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.170975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.171182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.171190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.171510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.171518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.171729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.171735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.172070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.172077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 
00:29:42.816 [2024-06-10 14:38:20.172404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.172411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.172722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.172729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.173035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.173041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.173216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.173223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.173467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.173474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.173764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.173770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.174055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.174061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.174395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.174403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.174729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.174736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.174947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.174953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 
00:29:42.816 [2024-06-10 14:38:20.175325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.175333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.175515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.175523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.175835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.175842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.176182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.176190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.176552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.176561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.176757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.176764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.177069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.177076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.816 qpair failed and we were unable to recover it. 00:29:42.816 [2024-06-10 14:38:20.177413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.816 [2024-06-10 14:38:20.177420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.177769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.177776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.178089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.178097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 
00:29:42.817 [2024-06-10 14:38:20.178262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.178271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.178552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.178559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.178890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.178898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.179216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.179224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.179538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.179546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.179857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.179864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.180010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.180018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.180385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.180393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.180579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.180588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.180891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.180898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 
00:29:42.817 [2024-06-10 14:38:20.181252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.181260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.181532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.181539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.181887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.181895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.182108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.182115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.182435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.182443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.182634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.182641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.183015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.183022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.183312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.183325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.183607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.183615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.183947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.183955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 
00:29:42.817 [2024-06-10 14:38:20.184271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.184278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.184584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.184592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.184876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.184883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.185172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.185180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.185475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.185483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.185797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.185804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.186171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.186178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.186486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.186494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.186824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.186831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 00:29:42.817 [2024-06-10 14:38:20.187144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.817 [2024-06-10 14:38:20.187152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.817 qpair failed and we were unable to recover it. 
00:29:42.817 [2024-06-10 14:38:20.187465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.817 [2024-06-10 14:38:20.187473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.817 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:38:20.187785 through 14:38:20.246376 ...]
00:29:42.824 [2024-06-10 14:38:20.246708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.824 [2024-06-10 14:38:20.246715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.824 qpair failed and we were unable to recover it.
00:29:42.824 [2024-06-10 14:38:20.246907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.246913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.247077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.247083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.247409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.247415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.247753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.247759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.248075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.248083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.248263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.248269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.248677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.248684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.248877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.248883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.249071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.249079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.249383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.249395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 
00:29:42.824 [2024-06-10 14:38:20.249712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.249718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.250009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.250017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.250180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.250186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.250471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.250477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.250658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.250665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.250976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.250983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.251281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.251288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.251636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.251643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.251970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.251977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.252329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.252336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 
00:29:42.824 [2024-06-10 14:38:20.252648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.252656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.252817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.824 [2024-06-10 14:38:20.252823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.824 qpair failed and we were unable to recover it. 00:29:42.824 [2024-06-10 14:38:20.253067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.253073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.253397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.253404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.253724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.253731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.253935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.253941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.254266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.254272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.254600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.254607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.254798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.254805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.254981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.254988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 
00:29:42.825 [2024-06-10 14:38:20.255305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.255312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.255630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.255637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.255979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.255986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.256311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.256322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.256490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.256496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.256665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.256674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.256978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.256985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.257309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.257324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.257706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.257712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.258026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.258032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 
00:29:42.825 [2024-06-10 14:38:20.258310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.258322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.258500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.258507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.258807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.258815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.259110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.259117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.259488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.259495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.259768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.259774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.260112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.260119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.260406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.260413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.260728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.260734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.260901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.260908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 
00:29:42.825 [2024-06-10 14:38:20.261217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.261224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.261561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.261568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.261730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.261737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.261980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.261986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.262262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.262269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.825 [2024-06-10 14:38:20.262657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.825 [2024-06-10 14:38:20.262665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.825 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.262983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.262991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.263322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.263330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.263627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.263633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.263929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.263936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 
00:29:42.826 [2024-06-10 14:38:20.264206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.264212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.264466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.264473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.264832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.264838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.265155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.265161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.265337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.265346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.265508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.265515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.265709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.265715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.265758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.265765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.266066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.266073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.266383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.266390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 
00:29:42.826 [2024-06-10 14:38:20.266725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.266732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.266927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.266933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.266975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.266982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.267187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.267195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.267505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.267513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.267710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.267719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.268030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.268037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.268230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.268237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.268345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.268352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.268533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.268540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 
00:29:42.826 [2024-06-10 14:38:20.268826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.268833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.269150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.269157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.269469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.269477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.269667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.826 [2024-06-10 14:38:20.269673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.826 qpair failed and we were unable to recover it. 00:29:42.826 [2024-06-10 14:38:20.269910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.269916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.270065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.270073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.270385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.270392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.270544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.270550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.270824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.270831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.271173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.271180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 
00:29:42.827 [2024-06-10 14:38:20.271388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.271394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.271732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.271739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.272070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.272077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.272403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.272410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.272741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.272748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.272935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.272942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.273160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.273167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.273489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.273496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.273825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.273833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.274016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.274023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 
00:29:42.827 [2024-06-10 14:38:20.274385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.274392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.274712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.274718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.275074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.275081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.275389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.275396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.275736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.275744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.275903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.275910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.276226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.276233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.276549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.276556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.276883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.276891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.277060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.277067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 
00:29:42.827 [2024-06-10 14:38:20.277384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.277392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.277706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.277714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.278022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.278029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.278194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.278203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.278489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.278496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.278798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.278807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.279100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.279107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.827 [2024-06-10 14:38:20.279260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.827 [2024-06-10 14:38:20.279267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.827 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.279457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.279464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.279765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.279771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 
00:29:42.828 [2024-06-10 14:38:20.280094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.280101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.280303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.280309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.280613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.280620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.280959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.280966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.281288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.281295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.281591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.281598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.281932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.281939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.282255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.282264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.282588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.282595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.282928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.282935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 
00:29:42.828 [2024-06-10 14:38:20.283258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.283266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.283593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.283601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.283959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.283967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.284128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.284136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.284550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.284557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.284734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.284741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.285072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.285078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.285369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.285376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.285547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.285554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.285824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.285831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 
00:29:42.828 [2024-06-10 14:38:20.286137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.286143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.286330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.286337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.286655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.286663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.286884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.286890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.287189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.287196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.287524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.287531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.287713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.287720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.287920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.287927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.288087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.288093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.288250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.288257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 
00:29:42.828 [2024-06-10 14:38:20.288511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.288519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.828 [2024-06-10 14:38:20.288824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.828 [2024-06-10 14:38:20.288832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.828 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.288871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.288877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.289084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.289092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.289401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.289408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.289729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.289737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.290043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.290049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.290325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.290332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.290519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.290526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.290863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.290870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 
00:29:42.829 [2024-06-10 14:38:20.291240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.291248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.291479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.291486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.291699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.291705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.291998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.292005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.292182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.292189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.292550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.292558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.292748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.292755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.293055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.293062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.293362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.293369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.293715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.293722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 
00:29:42.829 [2024-06-10 14:38:20.294044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.294052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.294212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.294219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.294582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.294590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.294752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.294759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.295102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.295110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.295422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.295431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.295644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.295652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.295809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.295816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.296110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.296117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.296472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.296479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 
00:29:42.829 [2024-06-10 14:38:20.296786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.296794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.297110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.297117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.297290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.297299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.297624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.297631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.297943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.297949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.829 qpair failed and we were unable to recover it. 00:29:42.829 [2024-06-10 14:38:20.298151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.829 [2024-06-10 14:38:20.298159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.298362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.298369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.298551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.298559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.298848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.298855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.299189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.299198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 
00:29:42.830 [2024-06-10 14:38:20.299237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.299243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.299520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.299527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.299877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.299884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.300181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.300188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.300486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.300493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.300750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.300758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.301069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.301077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.301410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.301418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.301629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.301637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.301959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.301965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 
00:29:42.830 [2024-06-10 14:38:20.302286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.302293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.302469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.302476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.302794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.302801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.302975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.302982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.303295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.303302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.303617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.303626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.303949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.303956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.304273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.304280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.304605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.304613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.304931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.304938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 
00:29:42.830 [2024-06-10 14:38:20.305259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.305266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.305467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.305474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.305784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.305791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.306115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.306122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.306311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.306322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.306622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.306628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.306948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.306955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.307139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.307146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.307457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.307464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.830 qpair failed and we were unable to recover it. 00:29:42.830 [2024-06-10 14:38:20.307779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.830 [2024-06-10 14:38:20.307786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 
00:29:42.831 [2024-06-10 14:38:20.308132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.308138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.308435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.308441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.308773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.308779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.308998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.309004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.309295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.309302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.309653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.309661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.309858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.309864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.310073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.310081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.310393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.310400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.310614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.310621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 
00:29:42.831 [2024-06-10 14:38:20.310838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.310846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.311172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.311179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.311419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.311426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.311579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.311585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.311762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.311768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.312079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.312099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.312331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.312339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.312658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.312666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.312875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.312882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.313054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.313061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 
00:29:42.831 [2024-06-10 14:38:20.313375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.313382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.313712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.313718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.313887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.313894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.314109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.314116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.314379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.314387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.314680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.314687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.314834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.314841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.315159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.831 [2024-06-10 14:38:20.315166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.831 qpair failed and we were unable to recover it. 00:29:42.831 [2024-06-10 14:38:20.315325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.315331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.315625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.315633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 
00:29:42.832 [2024-06-10 14:38:20.316016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.316022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.316310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.316321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.316634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.316641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.316959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.316965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.317279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.317286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.317477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.317483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.317830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.317837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.318170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.318177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.318408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.318415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.318794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.318801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 
00:29:42.832 [2024-06-10 14:38:20.319122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.319129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.319456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.319463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.319785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.319792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.320137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.320144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.320468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.320475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.320791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.320799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.320972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.320978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.321278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.321284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.321489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.321496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.321676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.321682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 
00:29:42.832 [2024-06-10 14:38:20.321990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.321997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.322186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.322193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.322507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.322514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.322862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.322869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.323053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.323060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.323170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.323178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.323455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.323462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.323809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.323815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.324135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.324141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.324341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.324348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 
00:29:42.832 [2024-06-10 14:38:20.324534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.324540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.832 [2024-06-10 14:38:20.324837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.832 [2024-06-10 14:38:20.324845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.832 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.325148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.325155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.325469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.325476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.325691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.325697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.326016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.326022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.326348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.326660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.326667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.327060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.327067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.327394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.327401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 
00:29:42.833 [2024-06-10 14:38:20.327717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.327724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.327928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.327935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.328115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.328122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.328438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.328445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.328796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.328802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.328942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.328947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.329132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.329138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.329338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.329345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.329627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.329634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 00:29:42.833 [2024-06-10 14:38:20.329794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.833 [2024-06-10 14:38:20.329802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:42.833 qpair failed and we were unable to recover it. 
00:29:42.833 [2024-06-10 14:38:20.329999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.833 [2024-06-10 14:38:20.330006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.833 qpair failed and we were unable to recover it.
00:29:42.833 [2024-06-10 14:38:20.330169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.833 [2024-06-10 14:38:20.330176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.833 qpair failed and we were unable to recover it.
00:29:42.833 [2024-06-10 14:38:20.330469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.833 [2024-06-10 14:38:20.330477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:42.833 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously between 2024-06-10 14:38:20.330799 and 14:38:20.387307 ...]
00:29:43.124 [2024-06-10 14:38:20.387495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.124 [2024-06-10 14:38:20.387503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:43.124 qpair failed and we were unable to recover it.
00:29:43.124 [2024-06-10 14:38:20.387825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.387833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.388167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.388175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.388456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.388464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.388654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.388662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.389049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.389056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.389402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.389411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.389709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.389716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.390030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.390037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.390235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.390243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.390562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.390569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 
00:29:43.124 [2024-06-10 14:38:20.390865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.390871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.391104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.391111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.391426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.391432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.391773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.391779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.391941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.391947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.392182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.392189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.392474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.392480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.392864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.392870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.393044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.393050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.393384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.393391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 
00:29:43.124 [2024-06-10 14:38:20.393576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.393583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.393767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.393773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.394048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.124 [2024-06-10 14:38:20.394055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.124 qpair failed and we were unable to recover it. 00:29:43.124 [2024-06-10 14:38:20.394385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.394391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.394698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.394705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.394902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.394908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.395137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.395143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.395319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.395326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.395502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.395509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.395803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.395809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 
00:29:43.125 [2024-06-10 14:38:20.396159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.396165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.396481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.396488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.396802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.396810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.397112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.397119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.397435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.397442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.397753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.397760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.398066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.398073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.398405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.398412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.398604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.398612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.398798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.398806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 
00:29:43.125 [2024-06-10 14:38:20.399149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.399157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.399355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.399362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.399510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.399518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.399671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.399679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.400018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.400025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.400411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.400418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.400726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.400735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.400924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.400931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.401259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.401266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.401623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.401631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 
00:29:43.125 [2024-06-10 14:38:20.401936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.401944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.402237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.402244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.402420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.402428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.402739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.402747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.403037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.403045] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.403368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.403375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.125 [2024-06-10 14:38:20.403539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.125 [2024-06-10 14:38:20.403546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.125 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.403866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.403873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.404032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.404040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.404185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.404192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 
00:29:43.126 [2024-06-10 14:38:20.404477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.404484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.404818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.404825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.405140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.405147] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.405422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.405431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.405515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.405521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.405643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.405650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.405986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.405993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.406333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.406341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.406576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.406583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.406867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.406874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 
00:29:43.126 [2024-06-10 14:38:20.407197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.407204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.407363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.407371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.407651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.407660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.407980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.407988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.408144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.408151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.408518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.408526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.408846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.408853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.409008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.409015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.409328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.409336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.409684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.409692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 
00:29:43.126 [2024-06-10 14:38:20.409914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.409921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.410237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.410244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.410535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.410543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.410927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.410934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.411236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.411244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.411422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.411429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.411711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.411719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.412033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.412040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.412342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.412349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.126 qpair failed and we were unable to recover it. 00:29:43.126 [2024-06-10 14:38:20.412535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.126 [2024-06-10 14:38:20.412542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 
00:29:43.127 [2024-06-10 14:38:20.412714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.412721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.413060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.413067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.413250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.413257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.413529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.413538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.413757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.413764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.414155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.414162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.414480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.414487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.414849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.414856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.415032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.415039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.415334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.415341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 
00:29:43.127 [2024-06-10 14:38:20.415528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.415535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.415837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.415844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.416024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.416032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.416250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.416257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.416575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.416583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.416898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.416905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.417097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.417104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.417386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.417393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.417564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.417571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.417905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.417913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 
00:29:43.127 [2024-06-10 14:38:20.418225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.418233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.418304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.418310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.418495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.418503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.418791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.418797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.419105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.419112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.419418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.419425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.419786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.419794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.420094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.420101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.420396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.420403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.420738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.420745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 
00:29:43.127 [2024-06-10 14:38:20.421061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.421067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.421366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.421373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.421695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.127 [2024-06-10 14:38:20.421701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.127 qpair failed and we were unable to recover it. 00:29:43.127 [2024-06-10 14:38:20.421737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.421744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.422043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.422049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.422360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.422368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.422691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.422698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.422994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.423001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.423160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.423166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 00:29:43.128 [2024-06-10 14:38:20.423485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.128 [2024-06-10 14:38:20.423502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.128 qpair failed and we were unable to recover it. 
00:29:43.128 [2024-06-10 14:38:20.423800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.128 [2024-06-10 14:38:20.423806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:43.128 qpair failed and we were unable to recover it.
00:29:43.128 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:38:20.424 through 14:38:20.483 ...]
00:29:43.134 [2024-06-10 14:38:20.483501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.134 [2024-06-10 14:38:20.483509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:43.134 qpair failed and we were unable to recover it.
00:29:43.134 [2024-06-10 14:38:20.483829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.134 [2024-06-10 14:38:20.483836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.134 qpair failed and we were unable to recover it. 00:29:43.134 [2024-06-10 14:38:20.484158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.134 [2024-06-10 14:38:20.484166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.134 qpair failed and we were unable to recover it. 00:29:43.134 [2024-06-10 14:38:20.484465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.134 [2024-06-10 14:38:20.484472] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.134 qpair failed and we were unable to recover it. 00:29:43.134 [2024-06-10 14:38:20.484764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.134 [2024-06-10 14:38:20.484771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.485092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.485098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.485416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.485423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.485733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.485740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.486054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.486062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.486343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.486351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.486549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.486556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 
00:29:43.135 [2024-06-10 14:38:20.486882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.486889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.487166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.487172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.487354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.487363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.487690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.487696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.487898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.487905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.488233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.488240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.488426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.488434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.488622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.488629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.488939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.488947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.489128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.489135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 
00:29:43.135 [2024-06-10 14:38:20.489470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.489477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.489809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.489816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.490026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.490033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.490377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.490386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.490690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.490698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.491001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.491008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.491345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.491352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.491395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.491401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.491687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.491695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.492013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.492020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 
00:29:43.135 [2024-06-10 14:38:20.492188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.492197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.492401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.492408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.492603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.492610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.492889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.492896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.493113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.135 [2024-06-10 14:38:20.493120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.135 qpair failed and we were unable to recover it. 00:29:43.135 [2024-06-10 14:38:20.493307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.493320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.493478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.493485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.493817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.493824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.493984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.493991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.494300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.494308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 
00:29:43.136 [2024-06-10 14:38:20.494508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.494515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.494800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.494807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.495119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.495126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.495353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.495360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.495536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.495543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.495832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.495839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.496132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.496139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.496317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.496324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.496579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.496585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.496863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.496869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 
00:29:43.136 [2024-06-10 14:38:20.497185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.497192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.497526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.497533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.497871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.497879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.498171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.498177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.498381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.498388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.498671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.498678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.499009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.499016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.499306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.499317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.499626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.499632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.499955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.499962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 
00:29:43.136 [2024-06-10 14:38:20.500077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.500084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.500387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.500395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.500717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.500725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.501046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.501053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.501394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.501401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.501714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.501721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.502041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.502048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.502090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.502097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.502401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.136 [2024-06-10 14:38:20.502408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.136 qpair failed and we were unable to recover it. 00:29:43.136 [2024-06-10 14:38:20.502748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.502755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 
00:29:43.137 [2024-06-10 14:38:20.502919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.502926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.503305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.503311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.503619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.503625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.503950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.503956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.504117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.504124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.504570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.504577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.504892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.504900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.505233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.505239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.505387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.505394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.505677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.505684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 
00:29:43.137 [2024-06-10 14:38:20.506017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.506024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.506187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.506194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.506490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.506497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.506793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.506799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.507123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.507129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.507431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.507439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.507602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.507608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.507903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.507910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.508252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.508258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.508561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.508569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 
00:29:43.137 [2024-06-10 14:38:20.508876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.508882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.509165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.509179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.509489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.509497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.509804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.509812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.510137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.510144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.510476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.510483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.510704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.510711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.511011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.511018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.511184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.511191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.511386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.511395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 
00:29:43.137 [2024-06-10 14:38:20.511575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.511582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.511893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.511899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.137 qpair failed and we were unable to recover it. 00:29:43.137 [2024-06-10 14:38:20.512096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.137 [2024-06-10 14:38:20.512103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.512487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.512494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.512766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.512773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.513104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.513111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.513427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.513434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.513770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.513776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.514072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.514079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.514406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.514414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 
00:29:43.138 [2024-06-10 14:38:20.514730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.514737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.514952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.514959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.515264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.515272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.515311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.515321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.515609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.515617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.515945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.515952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.516267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.516276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.516597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.516606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.516921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.516929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.516977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.516984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 
00:29:43.138 [2024-06-10 14:38:20.517295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.517302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.517611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.517619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.517933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.517941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.518254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.518262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.518602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.518610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.518798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.518805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.519083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.519090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.519279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.519288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.519630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.519638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.519799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.519808] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 
00:29:43.138 [2024-06-10 14:38:20.520099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.520106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.520422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.520429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.520763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.520772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.521144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.521151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.521313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.521324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.521627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.138 [2024-06-10 14:38:20.521635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.138 qpair failed and we were unable to recover it. 00:29:43.138 [2024-06-10 14:38:20.521947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.521954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.522268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.522275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.522587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.522594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.522888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.522896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 
00:29:43.139 [2024-06-10 14:38:20.523216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.523222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.523400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.523406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.523644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.523651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.523909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.523916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.524213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.524220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.524487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.524495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.524790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.524796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.525129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.525136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.525296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.525304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.525631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.525639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 
00:29:43.139 [2024-06-10 14:38:20.525943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.525951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.526116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.526125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.526300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.526307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.526474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.526481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.526755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.526763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.527066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.527073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.527293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.527301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.527619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.527626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.527940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.527948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.528222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.528229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 
00:29:43.139 [2024-06-10 14:38:20.528512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.528520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.528678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.528686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.528998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.529005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.529229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.139 [2024-06-10 14:38:20.529237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.139 qpair failed and we were unable to recover it. 00:29:43.139 [2024-06-10 14:38:20.529618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.529626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.529937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.529945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.530239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.530246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.530412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.530419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.530728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.530736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.531034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.531042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 
00:29:43.140 [2024-06-10 14:38:20.531198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.531205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.531491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.531497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.531842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.531850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.532157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.532164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.532325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.532332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.532617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.532623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.532835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.532842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.533161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.533168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.533473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.533480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.533658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.533665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 
00:29:43.140 [2024-06-10 14:38:20.533897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.533903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.534246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.534254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.534587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.534594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.534895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.534901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.535264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.535271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.535588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.535595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.535885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.535892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.536248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.536255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.536437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.536445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.536737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.536744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 
00:29:43.140 [2024-06-10 14:38:20.537050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.537057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.537383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.537390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.537712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.537718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.538035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.538041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.538330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.538336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.538649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.538656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.538953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.140 [2024-06-10 14:38:20.538960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.140 qpair failed and we were unable to recover it. 00:29:43.140 [2024-06-10 14:38:20.539168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.539175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.539484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.539490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.539686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.539693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 
00:29:43.141 [2024-06-10 14:38:20.539849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.539856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.540134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.540141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.540303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.540310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.540622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.540630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.540945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.540952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.541283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.541290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.541626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.541634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.541945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.541952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.542248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.542255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.542572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.542580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 
00:29:43.141 [2024-06-10 14:38:20.542788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.542795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.543101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.543109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.543419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.543426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.543602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.543609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.543770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.543776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.543946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.543953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.544290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.544297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.544464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.544471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.544765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.544772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.545095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.545102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 
00:29:43.141 [2024-06-10 14:38:20.545439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.545446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.545623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.545630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.545961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.545968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.546298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.546305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.546484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.546491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.546765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.546771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.547086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.547093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.547409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.547416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.547721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.547728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.548051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.548057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 
00:29:43.141 [2024-06-10 14:38:20.548353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.141 [2024-06-10 14:38:20.548360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.141 qpair failed and we were unable to recover it. 00:29:43.141 [2024-06-10 14:38:20.548557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.548564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.548786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.548792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.549087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.549094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.549387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.549394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.549733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.549740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.549906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.549913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.550064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.550071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.550361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.550367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.550717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.550725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 
00:29:43.142 [2024-06-10 14:38:20.551041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.551047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.551344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.551351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.551534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.551541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.551814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.551820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.552134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.552140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.552426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.552433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.552780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.552787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.552982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.552989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.553327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.553334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.553645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.553652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 
00:29:43.142 [2024-06-10 14:38:20.553949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.553955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.554271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.554277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.554575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.554582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.554993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.555000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.555345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.555352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.555651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.555657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.555967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.555974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.556290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.556297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.556612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.556619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.556914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.556921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 
00:29:43.142 [2024-06-10 14:38:20.557252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.557258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.557435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.557443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.557769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.557776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.558089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.558095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.558326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.142 [2024-06-10 14:38:20.558333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.142 qpair failed and we were unable to recover it. 00:29:43.142 [2024-06-10 14:38:20.558652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.558659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.558851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.558859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.559086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.559093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.559361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.559369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.559676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.559682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 
00:29:43.143 [2024-06-10 14:38:20.559856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.559862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.560169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.560176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.560576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.560583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.560860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.560867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.561261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.561558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.561565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.561604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.561611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.561912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.561918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.562268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.562275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.562480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.562489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 
00:29:43.143 [2024-06-10 14:38:20.562884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.562890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.563182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.563190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.563486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.563492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.563856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.563863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.564050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.564057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.564250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.564257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.564581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.564588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.564765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.564772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.564960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.564966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.565368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.565376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 
00:29:43.143 [2024-06-10 14:38:20.565547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.565554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.565748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.565756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.565913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.565921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.566094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.566101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.566380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.566387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.566425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.566432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.566655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.566662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.566841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.566847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.567178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-06-10 14:38:20.567184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-06-10 14:38:20.567486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.567492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-06-10 14:38:20.567803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.567809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.568130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.568136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.568474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.568480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.568773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.568779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.569111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.569117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.569293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.569299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.569699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.569706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.570021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.570027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.570339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.570346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.570523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.570529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-06-10 14:38:20.570848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.570855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.571054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.571061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.571220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.571227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.571621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.571628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.571799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.571805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.572091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.572098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.572275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.572283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.572594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.572600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.572960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.572966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.573281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.573289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-06-10 14:38:20.573457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.573464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.573504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.573511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.573838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.573845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.574184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.574191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.574519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.574526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.574871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.574878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.575173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.575179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.575487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-06-10 14:38:20.575494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-06-10 14:38:20.575821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.575827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.576145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.576152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.145 [2024-06-10 14:38:20.576337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.576344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.576537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.576544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.576841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.576848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.577035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.577042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.577341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.577348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.577467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.577473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.577741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.577748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.578049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.578055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.578346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.578353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.578579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.578585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.145 [2024-06-10 14:38:20.578871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.578877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.579185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.579192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.579390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.579398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.579439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.579446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.579751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.579758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.580071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.580078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.580242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.580250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.580569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.580576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.580853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.580860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.581025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.581032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.145 [2024-06-10 14:38:20.581301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.581308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.581642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.581649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.581950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.581957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.582234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.582241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.582559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.582566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.582730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.582737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.582922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.582930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.583131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.583138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.583418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.583425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.583734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.583742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.145 [2024-06-10 14:38:20.584067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-06-10 14:38:20.584073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-06-10 14:38:20.584268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.584275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.584593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.584599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.584914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.584921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.585245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.585252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.585432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.585439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.585632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.585639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.585809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.585816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.585893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.585900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.586210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.586217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 
00:29:43.146 [2024-06-10 14:38:20.586530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.586538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.586702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.586710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.587015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.587022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.587203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.587210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.587471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.587478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.587800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.587807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.588133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.588140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.588457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.588464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.588662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.588670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.588963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.588971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 
00:29:43.146 [2024-06-10 14:38:20.589289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.589296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.589627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.589634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.590026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.590033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.590338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.590346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.590449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.590455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.590746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.590752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.590931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.590939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.591184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.591191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.591590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.591597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.591885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.591892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 
00:29:43.146 [2024-06-10 14:38:20.591975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.591981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.592133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.592140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.592472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.592479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.592790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.592797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.146 [2024-06-10 14:38:20.593104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.146 [2024-06-10 14:38:20.593110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.146 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.593410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.593416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.593605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.593612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.593928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.593934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.594126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.594132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.594456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.594465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 
00:29:43.147 [2024-06-10 14:38:20.594630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.594637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.595035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.595041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.595353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.595360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.595669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.595676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.595965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.595972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.596285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.596292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.596481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.596489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.596795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.596802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.597123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.597130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.597499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.597506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 
00:29:43.147 [2024-06-10 14:38:20.597806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.597813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.598117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.598123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.598291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.598298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.598665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.598672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.598973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.598989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.599191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.599197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.599478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.599484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.599647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.599654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.599969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.599975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.600268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.600275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 
00:29:43.147 [2024-06-10 14:38:20.600435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.600443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.600727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.600734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.601046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.601053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.601357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.601364] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.601674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.601680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.602009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.602015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.602358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.602365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.602545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.602552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.147 qpair failed and we were unable to recover it. 00:29:43.147 [2024-06-10 14:38:20.602769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.147 [2024-06-10 14:38:20.602775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.603086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.603092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 
00:29:43.148 [2024-06-10 14:38:20.603264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.603271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.603551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.603558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.603738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.603744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.604019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.604026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.604350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.604357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.604541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.604548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.604824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.604831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.605146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.605153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.605457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.605464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.605650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.605659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 
00:29:43.148 [2024-06-10 14:38:20.605963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.605970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.606154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.606162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.606447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.606461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.606645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.606651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.606942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.606950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.607255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.607262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.607564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.607572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.607888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.607895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.608203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.608210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.608542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.608548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 
00:29:43.148 [2024-06-10 14:38:20.608825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.608831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.609165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.609172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.609501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.609508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.609804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.609811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.609994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.610000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.610290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.610296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.610636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.610643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.610947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.610953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.611271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.611278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.611492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.611499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 
00:29:43.148 [2024-06-10 14:38:20.611690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.611696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.612056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.148 [2024-06-10 14:38:20.612062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.148 qpair failed and we were unable to recover it. 00:29:43.148 [2024-06-10 14:38:20.612390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.612396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.612701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.612708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.613022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.613029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.613251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.613258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.613585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.613592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.613785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.613792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.614094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.614101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 00:29:43.149 [2024-06-10 14:38:20.614144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.149 [2024-06-10 14:38:20.614150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.149 qpair failed and we were unable to recover it. 
00:29:43.149 [2024-06-10 14:38:20.614503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:43.149 [2024-06-10 14:38:20.614510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 
00:29:43.149 qpair failed and we were unable to recover it. 
00:29:43.149 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back for every successive reconnect attempt between the first and last occurrences shown here ...] 
00:29:43.155 [2024-06-10 14:38:20.674281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:43.155 [2024-06-10 14:38:20.674290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 
00:29:43.155 qpair failed and we were unable to recover it. 
00:29:43.155 [2024-06-10 14:38:20.674472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.674480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.674780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.674787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.675043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.675050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.675234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.675242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.675455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.675462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.675772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.675780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.676068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.676076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.676249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.676257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.155 qpair failed and we were unable to recover it. 00:29:43.155 [2024-06-10 14:38:20.676556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.155 [2024-06-10 14:38:20.676563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.676879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.676886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 
00:29:43.156 [2024-06-10 14:38:20.677085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.677091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.677264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.677270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.677561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.677568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.677744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.677750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.677939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.677946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.678262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.678269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.678435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.678441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.678639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.678645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.678850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.678856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.679085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.679092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 
00:29:43.156 [2024-06-10 14:38:20.679434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.679440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.679478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.679485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.679814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.679820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.679987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.679993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.680275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.680281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.680358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.680365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.680646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.680652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.681009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.681015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.681164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.681172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.681485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.681493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 
00:29:43.156 [2024-06-10 14:38:20.681819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.681825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.682092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.682099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.682438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.682445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.682747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.682753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.683105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.683113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.683421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.683428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.683758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.683765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.683993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.684000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.684316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.684324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.684631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.684638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 
00:29:43.156 [2024-06-10 14:38:20.684952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.684960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.685294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.156 [2024-06-10 14:38:20.685302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.156 qpair failed and we were unable to recover it. 00:29:43.156 [2024-06-10 14:38:20.685475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.685483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.685702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.685709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.686013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.686020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.686255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.686262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.686632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.686640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.686945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.686952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.687256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.687263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.687637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.687645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 
00:29:43.157 [2024-06-10 14:38:20.687946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.687954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.688275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.688282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.688576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.688583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.688766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.688773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.689054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.689062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.689377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.689384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.689749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.689756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.690071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.690078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.690262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.690270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.690665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.690672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 
00:29:43.157 [2024-06-10 14:38:20.690994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.691002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.691333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.691340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.691622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.691629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.691958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.691965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.692124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.692131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.692439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.692447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.692721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.692730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.693047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.693055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.693246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.693253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.693451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.693459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 
00:29:43.157 [2024-06-10 14:38:20.693795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.693802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.694109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.694116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.694441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.694448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.694645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.694652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.694708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.694714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.157 [2024-06-10 14:38:20.695020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.157 [2024-06-10 14:38:20.695027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.157 qpair failed and we were unable to recover it. 00:29:43.158 [2024-06-10 14:38:20.695187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.158 [2024-06-10 14:38:20.695194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.158 qpair failed and we were unable to recover it. 00:29:43.158 [2024-06-10 14:38:20.695382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.158 [2024-06-10 14:38:20.695389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.158 qpair failed and we were unable to recover it. 00:29:43.158 [2024-06-10 14:38:20.695554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.158 [2024-06-10 14:38:20.695561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.158 qpair failed and we were unable to recover it. 00:29:43.158 [2024-06-10 14:38:20.695877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.158 [2024-06-10 14:38:20.695884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.158 qpair failed and we were unable to recover it. 
00:29:43.433 [2024-06-10 14:38:20.696088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.696096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.696285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.696294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.696646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.696653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.696970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.696977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.697282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.697289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.697450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.697458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.697810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.697817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.698205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.698212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.698540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.698547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.698865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.698872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 
00:29:43.433 [2024-06-10 14:38:20.699201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.699208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.699402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.699409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.699732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.699739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.700056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.700064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.700368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.700375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.700731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.700738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.701029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.701036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.701351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.701359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.701575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.701582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.701875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.701881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 
00:29:43.433 [2024-06-10 14:38:20.702175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.702182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.702484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.702491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.702895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.702902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.433 [2024-06-10 14:38:20.703103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.433 [2024-06-10 14:38:20.703109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.433 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.703415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.703427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.703789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.703796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.704009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.704019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.704348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.704355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.704699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.704705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.705030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.705037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 
00:29:43.434 [2024-06-10 14:38:20.705172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.705178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.705522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.705529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.705851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.705858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.706179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.706186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.706491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.706499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.706849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.706856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.707176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.707183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.707418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.707424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.707612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.707618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.707878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.707885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 
00:29:43.434 [2024-06-10 14:38:20.708234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.708241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.708553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.708560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.708857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.708864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.709063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.709070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.709259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.709266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.709556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.709563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.709875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.709882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.710043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.710049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.710322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.710329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.710659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.710666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 
00:29:43.434 [2024-06-10 14:38:20.710847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.710854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.711076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.711083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.711388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.711396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.711606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.711613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.711803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.711811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.712001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.712008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.434 [2024-06-10 14:38:20.712189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.434 [2024-06-10 14:38:20.712196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.434 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.712495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.712503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.712847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.712854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.713201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.713209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 
00:29:43.435 [2024-06-10 14:38:20.713508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.713516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.713857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.713864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.714206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.714213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.714486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.714494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.714833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.714841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.715016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.715023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.715207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.715217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.715534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.715542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.715906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.715913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.716273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.716280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 
00:29:43.435 [2024-06-10 14:38:20.716457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.716464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.716747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.716754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.717095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.717102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.717438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.717445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.717658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.717664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.717937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.717943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.718124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.718130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.718514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.718521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.718835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.718842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.719133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.719147] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 
00:29:43.435 [2024-06-10 14:38:20.719457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.719464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.719781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.719788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.720127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.720134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.720423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.720431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.720718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.720724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.721061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.721068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.721245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.721252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.721437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.721444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.721768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.721775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 00:29:43.435 [2024-06-10 14:38:20.722189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.435 [2024-06-10 14:38:20.722197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.435 qpair failed and we were unable to recover it. 
00:29:43.436 [2024-06-10 14:38:20.722503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.722511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.722818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.722826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.723131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.723138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.723449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.723456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.723811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.723818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.724136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.724143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.724437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.724444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.724746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.724758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.725065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.725269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 
00:29:43.436 [2024-06-10 14:38:20.725504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.725612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.725910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.725959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.725966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.726271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.726279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.726320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.726327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.726561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.726570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.726743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.726751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.727066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.727073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.727387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.727394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 
00:29:43.436 [2024-06-10 14:38:20.727567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.727573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.727866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.727872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.728177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.728192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.728483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.728491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.728710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.728716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.728757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.728764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.729121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.729128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.729518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.729526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.729701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.729709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.730027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.730034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 
00:29:43.436 [2024-06-10 14:38:20.730225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.730232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.730598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.730604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.730926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.730932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.436 qpair failed and we were unable to recover it. 00:29:43.436 [2024-06-10 14:38:20.731261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.436 [2024-06-10 14:38:20.731268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.731577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.731585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.731859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.731866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.732045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.732053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.732361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.732368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.732670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.732677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.732997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.733004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 
00:29:43.437 [2024-06-10 14:38:20.733225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.733232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.733519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.733526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.733817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.733824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.734165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.734171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.734354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.734361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.734636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.734643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.735001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.735007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.735297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.735304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.735613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.735620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.735779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.735786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 
00:29:43.437 [2024-06-10 14:38:20.736011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.736018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.736328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.736336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.736667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.736674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.736878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.736885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.737203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.737210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.737544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.737551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.737832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.737841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.738237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.738244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.738560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.738567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.738861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.738868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 
00:29:43.437 [2024-06-10 14:38:20.739028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.739034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.739258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.739265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.739557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.739565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.739926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.739933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.740256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.740263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.740442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.740449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.740663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.740670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.437 qpair failed and we were unable to recover it. 00:29:43.437 [2024-06-10 14:38:20.740984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.437 [2024-06-10 14:38:20.740992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.741172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.741180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.741490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.741497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 
00:29:43.438 [2024-06-10 14:38:20.741660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.741667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.742009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.742017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.742350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.742357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.742678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.742685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.742996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.743003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.743165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.743173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.743327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.743334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.743626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.743633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.743786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.743792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.744086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.744093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 
00:29:43.438 [2024-06-10 14:38:20.744387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.744395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.744757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.744765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.744930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.744938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.745122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.745130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.745435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.745442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.745748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.745761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.746069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.746075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.746209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.746215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.746590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.746683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.747084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.747118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 
00:29:43.438 [2024-06-10 14:38:20.747377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.747420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd79c000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.438 [2024-06-10 14:38:20.747783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.438 [2024-06-10 14:38:20.747791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.438 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.748089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.748097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.748403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.748410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.748723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.748729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.748904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.748911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.749207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.749216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.749532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.749540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.749704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.749712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.750002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.750010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 
00:29:43.439 [2024-06-10 14:38:20.750348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.750356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.750727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.750734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.751015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.751022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.751351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.751358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.751674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.751682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.751998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.752006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.752345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.752352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.752536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.752543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.752839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.752847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.753192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.753199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 
00:29:43.439 [2024-06-10 14:38:20.753484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.753492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.753671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.753679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.753962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.753969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.754148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.754156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.754468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.754476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.754810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.754817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.755133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.755140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.755447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.755454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.755616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.755624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.756016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.756023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 
00:29:43.439 [2024-06-10 14:38:20.756325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.756333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.756516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.756524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.756704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.756712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.439 [2024-06-10 14:38:20.756995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.439 [2024-06-10 14:38:20.757003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.439 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.757321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.757329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.757634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.757641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.757793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.757800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.758046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.758053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.758121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.758128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 00:29:43.440 [2024-06-10 14:38:20.758439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-06-10 14:38:20.758447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it. 
00:29:43.440 [2024-06-10 14:38:20.758754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.440 [2024-06-10 14:38:20.758761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:43.440 qpair failed and we were unable to recover it.
00:29:43.440 [... the same three-message error cluster -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 14:38:20.758932 through 14:38:20.816480 ...]
00:29:43.446 [2024-06-10 14:38:20.816811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.446 [2024-06-10 14:38:20.816818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420
00:29:43.446 qpair failed and we were unable to recover it.
00:29:43.446 [2024-06-10 14:38:20.817153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.817161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.817482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.817489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.817662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.817669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.817977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.817984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.818283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.818291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.818577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.818584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.446 [2024-06-10 14:38:20.818781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.446 [2024-06-10 14:38:20.818789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.446 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.818832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.818838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.819001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.819008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.819350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.819357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 
00:29:43.447 [2024-06-10 14:38:20.819520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.819526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.819835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.819842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.820030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.820037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.820341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.820348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.820387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.820393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.820727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.820734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.821136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.821143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.821448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.821455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.821773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.821780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.822118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.822124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 
00:29:43.447 [2024-06-10 14:38:20.822434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.822441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.822769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.822776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:43.447 [2024-06-10 14:38:20.823106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.823113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:43.447 [2024-06-10 14:38:20.823435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.823442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:43.447 [2024-06-10 14:38:20.823649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.823655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:43.447 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.447 [2024-06-10 14:38:20.824021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.824028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.824371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.824377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.824702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.824709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 
00:29:43.447 [2024-06-10 14:38:20.824948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.824955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.825150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.825158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.825483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.825491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.825672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.825679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.825961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.825967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.826369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.826377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.826690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.826696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.826969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.826976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-06-10 14:38:20.827295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-06-10 14:38:20.827301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.827610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.827618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-06-10 14:38:20.827912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.827918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.828245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.828253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.828519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.828526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.828825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.828838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.829194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.829201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.829392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.829399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.829626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.829632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.829930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.829937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.830243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.830249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.830418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.830426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-06-10 14:38:20.830471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.830478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.830811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.830818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.831112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.831119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.831400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.831407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.831598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.831605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.831909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.831916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.831983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.831991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.832289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.832298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.832477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.832485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.832786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.832792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-06-10 14:38:20.833119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.833126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.833306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.833312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.833623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.833630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.833803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.833812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.834123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.834130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.834348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.834355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.834666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.834674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.834979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.834986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.835299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.835307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.835613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.835621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-06-10 14:38:20.835931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.835939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-06-10 14:38:20.836244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-06-10 14:38:20.836254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.836441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.836449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.836739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.836747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.836909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.836918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.837097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.837104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.837390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.837397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.837711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.837717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.837891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.837898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.838052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.838059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-06-10 14:38:20.838282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.838289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.838556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.838563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.838921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.838930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.839110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.839117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.839231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.839239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.839642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.839730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.840219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.840253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.840666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.840700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c4290 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.841020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.841028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.841366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.841374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-06-10 14:38:20.841738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.841745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.842068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.842075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.842393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.842400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.842716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.842724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.843043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.843050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.843121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.843127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.843391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-06-10 14:38:20.843399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-06-10 14:38:20.843748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.843756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.844098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.844106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.844432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.844439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-06-10 14:38:20.844792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.844800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.844978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.844985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.845187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.845194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.845507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.845516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.845724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.845731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.845919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.845927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.846222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.846229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.846422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.846430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.846697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.846704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.847001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.847009] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-06-10 14:38:20.847265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.847272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.847562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.847568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.847778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.847785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.848106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.848113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.848428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.848435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.848840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.848847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.849040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.849047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.849415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.849422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.849621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.849628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.849934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.849941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-06-10 14:38:20.850145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.850152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.850470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.850478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.850815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.850821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.851108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.851123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.851419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.851427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.851620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.851628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.852017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.852024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.852320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.852328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.852373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-06-10 14:38:20.852380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-06-10 14:38:20.852648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.852656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 
00:29:43.451 [2024-06-10 14:38:20.852959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.852966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.853258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.853265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.853559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.853568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.853876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.853883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.854040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.854047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.854299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.854307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.854500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.854508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.854684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.854691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.855062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.855069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.855364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.855371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 
00:29:43.451 [2024-06-10 14:38:20.855667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.855674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.855994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.856001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.856122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.856129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.856354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.856363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.856640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.856647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.856966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.856973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.857292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.857300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.857484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.857492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.857836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.857843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.858013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.858020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 
00:29:43.451 [2024-06-10 14:38:20.858183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.858191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.858492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.858501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.858804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.858812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.859127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.859134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.859177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.859183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.859342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.859349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.859679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.859687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.859867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.859874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.860093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.860100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.860311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.860321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 
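(Editor's note on the repeated entries above: errno = 111 is ECONNREFUSED on Linux. The initiator side of nvmf_target_disconnect_tc2 keeps retrying a TCP connect to 10.0.0.2 port 4420 while nothing is accepting on that address yet, and every attempt is logged as an unrecoverable qpair. A minimal sketch of how the same condition could be checked by hand from the build host is below; the probe itself is an assumption for illustration and is not part of target_disconnect.sh, only the address and port come from the log.)
# hedged sketch: reproduce/verify the ECONNREFUSED (errno 111) seen in the qpair errors
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "port 4420 on 10.0.0.2 is accepting connections"
else
  echo "connect refused or timed out (no listener yet) - matches errno = 111 above"
fi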
00:29:43.451 [2024-06-10 14:38:20.860641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.860648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-06-10 14:38:20.860812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-06-10 14:38:20.860819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.861091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.861098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.861440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.861448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.861586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.861592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.861922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.861930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.452 [2024-06-10 14:38:20.862263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.862282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.862621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.862630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.452 [2024-06-10 14:38:20.862830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.862838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 
00:29:43.452 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.452 [2024-06-10 14:38:20.863206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.863215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.452 [2024-06-10 14:38:20.863386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.863394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.863719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.863726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.863918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.863926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.864191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.864199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.864538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.864546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.864889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.864896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.865190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.865197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.865415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.865423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 
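(Editor's note: interleaved with the connection retries, the xtrace lines above show the test arming its cleanup trap (process_shm / nvmftestfini on SIGINT, SIGTERM and EXIT) and creating a 64 MB malloc bdev named Malloc0 with a 512-byte block size. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; a rough manual equivalent is sketched below, where the scripts/rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions, not taken from this log.)
# hedged sketch: the traced bdev_malloc_create call issued by hand against a running spdk_tgt
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB bdev, 512 B blocks, named Malloc0
./scripts/rpc.py bdev_get_bdevs -b Malloc0              # optional: confirm the bdev was created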
00:29:43.452 [2024-06-10 14:38:20.865792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.865799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.865986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.865993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.866217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.866224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.866538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.866545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.866882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.866888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.867201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.867208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.867485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.867492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.867818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.867825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.868147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.868153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.868337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.868345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 
00:29:43.452 [2024-06-10 14:38:20.868490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.452 [2024-06-10 14:38:20.868496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.452 qpair failed and we were unable to recover it. 00:29:43.452 [2024-06-10 14:38:20.868830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.868836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.869132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.869139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.869471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.869477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.869620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.869627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.869988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.869994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.870322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.870328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.870654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.870662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.871005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.871011] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.871345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.871354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 
00:29:43.453 [2024-06-10 14:38:20.871614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.871622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.871963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.871970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.872138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.872146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.872536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.872544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.872840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.872847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.873160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.873167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.873479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.873486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.873776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.873783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.873940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.873948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.874224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.874232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 
00:29:43.453 [2024-06-10 14:38:20.874415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.874425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.874762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.874769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.875088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.875095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.875411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.875418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.875764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.875771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.876094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.876100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.876265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.876272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.876592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.876599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.876774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.876781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.876980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.876986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 
00:29:43.453 [2024-06-10 14:38:20.877290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.877298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.877609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.877616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.877828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.877835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.878156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.878163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 [2024-06-10 14:38:20.878341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.453 [2024-06-10 14:38:20.878348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.453 qpair failed and we were unable to recover it. 00:29:43.453 Malloc0 00:29:43.453 [2024-06-10 14:38:20.878793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.878800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.879093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.879100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.879144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.879151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.454 [2024-06-10 14:38:20.879413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.879420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 
00:29:43.454 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:43.454 [2024-06-10 14:38:20.879732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.879739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.454 [2024-06-10 14:38:20.880074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.880082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.454 [2024-06-10 14:38:20.880398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.880406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.880621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.880628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.880961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.880968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.881302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.881309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.881644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.881653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.881971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.881978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.882256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.882263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 
00:29:43.454 [2024-06-10 14:38:20.882670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.882677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.882884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.882891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.883216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.883224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.883399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.883406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.883701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.883709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.884020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.884027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.884342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.884350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.884582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.884590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.884910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.884917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.885076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.885084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 
00:29:43.454 [2024-06-10 14:38:20.885418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.885425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.885718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.454 [2024-06-10 14:38:20.885725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.454 qpair failed and we were unable to recover it. 00:29:43.454 [2024-06-10 14:38:20.885965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.454 [2024-06-10 14:38:20.886030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.886037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.886330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.886336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.886743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.886750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.886953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.886960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.887135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.887142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.887440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.887447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.887751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.887757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 
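(Editor's note: the "*** TCP Transport Init ***" notice from tcp.c above is the target-side confirmation that the nvmf_create_transport -t tcp RPC traced a few lines earlier succeeded. A hedged manual equivalent is sketched below; any extra options the script passes, such as the trailing -o, are not reproduced here.)
# hedged sketch: create the TCP transport on a running SPDK NVMe-oF target
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_get_transports        # should now list the tcp transport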
00:29:43.455 [2024-06-10 14:38:20.888057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.888063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.888371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.888378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.888680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.888686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.889019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.889026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.889343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.889350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.889668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.889674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.889982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.889988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.890303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.890310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.890575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.890584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.890911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.890918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 
00:29:43.455 [2024-06-10 14:38:20.891106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.891113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.891390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.891397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.891710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.891717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.891797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.891803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.891946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.891953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.892154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.892162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.892484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.892490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.892689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.892695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.893017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.893024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.893363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.893371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 
00:29:43.455 [2024-06-10 14:38:20.893691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.893698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.894024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.894031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.894321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.894329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.894640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.894648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 [2024-06-10 14:38:20.894854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.894861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.455 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.455 [2024-06-10 14:38:20.895176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.455 [2024-06-10 14:38:20.895184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.455 qpair failed and we were unable to recover it. 00:29:43.456 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.456 [2024-06-10 14:38:20.895514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.895522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.456 [2024-06-10 14:38:20.895835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.895842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
00:29:43.456 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.456 [2024-06-10 14:38:20.896137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.896144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.896325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.896333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.896713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.896720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.896898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.896904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.897196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.897203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.897470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.897477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.897754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.897761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.897944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.897950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.898103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.898109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
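(Editor's note: the traced RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 defines the NVMe-oF subsystem the host will later connect to; -a allows any host NQN and -s sets the serial number. A minimal manual sketch, not the test script itself:)
# hedged sketch: define the subsystem the initiator will target
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_get_subsystems        # verify cnode1 is present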
00:29:43.456 [2024-06-10 14:38:20.898411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.898418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.898744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.898751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.898804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.898810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.899164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.899171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.899471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.899478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.899789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.899795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.899971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.899978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.900295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.900302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.900652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.900658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.900976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.900982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
00:29:43.456 [2024-06-10 14:38:20.901280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.901288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.901478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.901486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.901776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.901783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.902129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.902136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.902441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.902448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.902760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.902767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.903078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.903085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.903245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.903253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.903464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.903472] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.903810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.903818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
00:29:43.456 [2024-06-10 14:38:20.904003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-06-10 14:38:20.904010] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-06-10 14:38:20.904050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.904057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.904371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.904379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.904537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.904545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.904851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.904858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.905171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.905178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.905365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.905372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.905665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.905672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.905987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.905994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.906212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.906219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 
00:29:43.457 [2024-06-10 14:38:20.906562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.906569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.906883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.906890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.907050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.907057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.457 [2024-06-10 14:38:20.907437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.907445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.457 [2024-06-10 14:38:20.907741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.907748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.457 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.457 [2024-06-10 14:38:20.908070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.908077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.908414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.908420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.908734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.908740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 
00:29:43.457 [2024-06-10 14:38:20.909129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.909136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.909450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.909456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.909762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.909769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.910089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.910095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.910259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.910266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.910569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.910582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.910907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.910914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.911203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.911210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.911502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.911509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.911853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.911860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 
00:29:43.457 [2024-06-10 14:38:20.912019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.912026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.912385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.912391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.912723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.912730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.913068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.913075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.913396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.913404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-06-10 14:38:20.913631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-06-10 14:38:20.913638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.913906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.913913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.914115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.914121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.914184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.914190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.914494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.914500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 
00:29:43.458 [2024-06-10 14:38:20.914683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.914690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.914998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.915005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.915057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.915063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.915342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.915348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.915661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.915668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.915959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.915967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.916362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.916369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.916680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.916686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.916883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.916889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.917260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.917266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 
00:29:43.458 [2024-06-10 14:38:20.917588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.917595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.917909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.917916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.918213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.918220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.918546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.918552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.918896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.918903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.458 [2024-06-10 14:38:20.919116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.919124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.458 [2024-06-10 14:38:20.919441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.919448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.458 [2024-06-10 14:38:20.919786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.919793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 
00:29:43.458 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.458 [2024-06-10 14:38:20.920108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.920115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.920505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.920511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.920797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.920803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.921145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.921152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.921368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.921374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.921554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.921561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.921868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.921876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.922055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.922062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 00:29:43.458 [2024-06-10 14:38:20.922296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.458 [2024-06-10 14:38:20.922302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.458 qpair failed and we were unable to recover it. 
00:29:43.458 [2024-06-10 14:38:20.922583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.922590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.922900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.922907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.923198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.923206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.923402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.923409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.923596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.923603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.923886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.923893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.924188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.924195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.924554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.924561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.924871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.924877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.925201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.925207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 
00:29:43.459 [2024-06-10 14:38:20.925385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.925392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.925577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.925584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.925883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.925889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.926227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.459 [2024-06-10 14:38:20.926234] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd7a4000b90 with addr=10.0.0.2, port=4420 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.926230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.459 [2024-06-10 14:38:20.936772] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.459 [2024-06-10 14:38:20.936860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.459 [2024-06-10 14:38:20.936874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.459 [2024-06-10 14:38:20.936880] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.459 [2024-06-10 14:38:20.936884] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.459 [2024-06-10 14:38:20.936899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.459 qpair failed and we were unable to recover it. 
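For context, the xtrace lines interleaved with the error spam above show host/target_disconnect.sh bringing the target side up over SPDK's JSON-RPC interface (nvmf_subsystem_add_ns with Malloc0, then the two nvmf_subsystem_add_listener calls), and the errno = 111 (ECONNREFUSED) retries are answered once the nvmf_tcp_listen notice for 10.0.0.2 port 4420 appears. A minimal sketch of that setup sequence, issued by hand with scripts/rpc.py, could look like the lines below; the add_ns and add_listener calls mirror the trace above, while the transport, subsystem, and Malloc0 creation steps are assumptions about earlier setup that is not visible in this excerpt:

  scripts/rpc.py nvmf_create_transport -t tcp                            # assumed: the TCP transport must exist before listeners
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                    # assumed: a 64 MiB malloc bdev to back the namespace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a     # assumed: subsystem created with allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420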
00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.459 14:38:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3222674 00:29:43.459 [2024-06-10 14:38:20.946765] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.459 [2024-06-10 14:38:20.946831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.459 [2024-06-10 14:38:20.946843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.459 [2024-06-10 14:38:20.946848] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.459 [2024-06-10 14:38:20.946852] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.459 [2024-06-10 14:38:20.946863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.956761] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.459 [2024-06-10 14:38:20.956809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.459 [2024-06-10 14:38:20.956820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.459 [2024-06-10 14:38:20.956827] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.459 [2024-06-10 14:38:20.956832] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.459 [2024-06-10 14:38:20.956843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.459 qpair failed and we were unable to recover it. 00:29:43.459 [2024-06-10 14:38:20.966754] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.459 [2024-06-10 14:38:20.966811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.459 [2024-06-10 14:38:20.966823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.459 [2024-06-10 14:38:20.966828] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.459 [2024-06-10 14:38:20.966832] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.459 [2024-06-10 14:38:20.966842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.459 qpair failed and we were unable to recover it. 
00:29:43.459 [2024-06-10 14:38:20.976685] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.459 [2024-06-10 14:38:20.976754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.460 [2024-06-10 14:38:20.976766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.460 [2024-06-10 14:38:20.976770] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.460 [2024-06-10 14:38:20.976775] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.460 [2024-06-10 14:38:20.976784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.460 qpair failed and we were unable to recover it. 00:29:43.460 [2024-06-10 14:38:20.986771] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.460 [2024-06-10 14:38:20.986815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.460 [2024-06-10 14:38:20.986826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.460 [2024-06-10 14:38:20.986831] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.460 [2024-06-10 14:38:20.986835] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.460 [2024-06-10 14:38:20.986845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.460 qpair failed and we were unable to recover it. 00:29:43.460 [2024-06-10 14:38:20.996777] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.460 [2024-06-10 14:38:20.996823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.460 [2024-06-10 14:38:20.996833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.460 [2024-06-10 14:38:20.996838] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.460 [2024-06-10 14:38:20.996842] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.460 [2024-06-10 14:38:20.996852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.460 qpair failed and we were unable to recover it. 
00:29:43.460 [2024-06-10 14:38:21.006818] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.460 [2024-06-10 14:38:21.006912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.460 [2024-06-10 14:38:21.006923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.460 [2024-06-10 14:38:21.006927] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.460 [2024-06-10 14:38:21.006931] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.460 [2024-06-10 14:38:21.006941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.460 qpair failed and we were unable to recover it. 00:29:43.723 [2024-06-10 14:38:21.016830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.016887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.016899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.016903] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.016908] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.016918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 00:29:43.723 [2024-06-10 14:38:21.026878] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.026925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.026936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.026941] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.026945] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.026955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 
00:29:43.723 [2024-06-10 14:38:21.036883] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.036935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.036945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.036950] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.036954] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.036964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 00:29:43.723 [2024-06-10 14:38:21.046939] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.046988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.047001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.047006] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.047010] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.047020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 00:29:43.723 [2024-06-10 14:38:21.056927] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.056978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.056988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.056993] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.056997] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.057007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 
00:29:43.723 [2024-06-10 14:38:21.066982] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.067047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.067058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.067062] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.723 [2024-06-10 14:38:21.067067] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.723 [2024-06-10 14:38:21.067076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.723 qpair failed and we were unable to recover it. 00:29:43.723 [2024-06-10 14:38:21.077035] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.723 [2024-06-10 14:38:21.077088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.723 [2024-06-10 14:38:21.077099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.723 [2024-06-10 14:38:21.077103] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.077108] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.077117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.086923] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.086969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.086979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.086984] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.086988] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.087002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 
00:29:43.724 [2024-06-10 14:38:21.096934] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.097001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.097012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.097017] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.097021] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.097031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.107097] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.107181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.107199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.107204] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.107209] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.107223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.117119] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.117165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.117177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.117182] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.117187] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.117197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 
00:29:43.724 [2024-06-10 14:38:21.127116] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.127168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.127179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.127184] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.127188] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.127198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.137203] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.137256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.137270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.137275] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.137279] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.137289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.147095] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.147139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.147150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.147154] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.147158] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.147168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 
00:29:43.724 [2024-06-10 14:38:21.157254] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.157299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.157310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.157319] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.157323] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.157333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.167122] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.167166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.167176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.167181] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.167186] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.167195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.177391] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.177493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.177504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.177508] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.177515] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.177525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 
00:29:43.724 [2024-06-10 14:38:21.187370] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.187422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.187433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.724 [2024-06-10 14:38:21.187438] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.724 [2024-06-10 14:38:21.187442] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.724 [2024-06-10 14:38:21.187452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.724 qpair failed and we were unable to recover it. 00:29:43.724 [2024-06-10 14:38:21.197392] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.724 [2024-06-10 14:38:21.197435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.724 [2024-06-10 14:38:21.197445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.197450] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.197454] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.197464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.207401] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.207473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.207483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.207488] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.207492] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.207502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 
00:29:43.725 [2024-06-10 14:38:21.217406] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.217456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.217467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.217472] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.217476] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.217485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.227414] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.227473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.227484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.227489] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.227493] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.227503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.237458] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.237503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.237513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.237518] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.237522] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.237532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 
00:29:43.725 [2024-06-10 14:38:21.247457] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.247501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.247512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.247516] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.247520] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.247530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.257497] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.257548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.257559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.257563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.257567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.257577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.267497] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.267545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.267556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.267563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.267567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.267577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 
00:29:43.725 [2024-06-10 14:38:21.277513] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.277555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.277565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.277570] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.277574] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.277584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.287574] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.287667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.287677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.287682] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.287686] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.287696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 00:29:43.725 [2024-06-10 14:38:21.297599] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.725 [2024-06-10 14:38:21.297689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.725 [2024-06-10 14:38:21.297700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.725 [2024-06-10 14:38:21.297704] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.725 [2024-06-10 14:38:21.297708] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.725 [2024-06-10 14:38:21.297718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.725 qpair failed and we were unable to recover it. 
00:29:43.725 [2024-06-10 14:38:21.307514] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.726 [2024-06-10 14:38:21.307564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.726 [2024-06-10 14:38:21.307577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.726 [2024-06-10 14:38:21.307581] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.726 [2024-06-10 14:38:21.307585] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.726 [2024-06-10 14:38:21.307595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.726 qpair failed and we were unable to recover it. 00:29:43.988 [2024-06-10 14:38:21.317644] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.988 [2024-06-10 14:38:21.317688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.988 [2024-06-10 14:38:21.317700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.988 [2024-06-10 14:38:21.317705] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.988 [2024-06-10 14:38:21.317709] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.988 [2024-06-10 14:38:21.317719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.988 qpair failed and we were unable to recover it. 00:29:43.988 [2024-06-10 14:38:21.327686] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.327734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.327744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.327749] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.327753] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.327763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 
00:29:43.989 [2024-06-10 14:38:21.337726] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.337778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.337788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.337792] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.337796] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.337806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.347693] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.347737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.347747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.347752] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.347756] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.347765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.357760] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.357807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.357817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.357824] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.357829] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.357839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 
00:29:43.989 [2024-06-10 14:38:21.367750] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.367796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.367806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.367811] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.367815] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.367824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.377858] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.377911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.377921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.377926] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.377930] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.377940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.387838] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.387885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.387896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.387900] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.387904] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.387913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 
00:29:43.989 [2024-06-10 14:38:21.397864] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.397912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.397923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.397927] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.397931] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.397941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.407777] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.407823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.407834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.407838] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.407842] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.407852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.417895] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.417950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.417961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.417966] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.417970] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.417979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 
00:29:43.989 [2024-06-10 14:38:21.427940] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.427986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.427997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.428001] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.428006] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.428015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.437987] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.989 [2024-06-10 14:38:21.438032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.989 [2024-06-10 14:38:21.438042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.989 [2024-06-10 14:38:21.438047] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.989 [2024-06-10 14:38:21.438051] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.989 [2024-06-10 14:38:21.438061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.989 qpair failed and we were unable to recover it. 00:29:43.989 [2024-06-10 14:38:21.447997] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.448043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.448056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.448061] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.448065] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.448075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 
00:29:43.990 [2024-06-10 14:38:21.458065] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.458146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.458156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.458161] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.458165] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.458174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.468052] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.468096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.468106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.468111] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.468115] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.468125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.478083] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.478128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.478139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.478143] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.478147] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.478157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 
00:29:43.990 [2024-06-10 14:38:21.488107] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.488157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.488167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.488172] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.488176] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.488189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.498115] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.498165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.498176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.498181] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.498185] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.498194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.508153] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.508238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.508248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.508253] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.508257] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.508267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 
00:29:43.990 [2024-06-10 14:38:21.518190] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.518258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.518268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.518273] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.518277] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.518287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.528189] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.528236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.528247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.528252] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.528256] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.528266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.538250] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.538332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.538346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.538351] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.538355] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.538365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 
00:29:43.990 [2024-06-10 14:38:21.548274] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.548328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.548341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.548347] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.548352] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.548363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.558297] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.558343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.990 [2024-06-10 14:38:21.558354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.990 [2024-06-10 14:38:21.558358] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.990 [2024-06-10 14:38:21.558362] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.990 [2024-06-10 14:38:21.558372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.990 qpair failed and we were unable to recover it. 00:29:43.990 [2024-06-10 14:38:21.568345] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.990 [2024-06-10 14:38:21.568395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.991 [2024-06-10 14:38:21.568405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.991 [2024-06-10 14:38:21.568410] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.991 [2024-06-10 14:38:21.568414] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.991 [2024-06-10 14:38:21.568424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.991 qpair failed and we were unable to recover it. 
00:29:43.991 [2024-06-10 14:38:21.578341] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.991 [2024-06-10 14:38:21.578390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.991 [2024-06-10 14:38:21.578401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.991 [2024-06-10 14:38:21.578406] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.991 [2024-06-10 14:38:21.578412] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:43.991 [2024-06-10 14:38:21.578422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.991 qpair failed and we were unable to recover it. 00:29:44.254 [2024-06-10 14:38:21.588381] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.254 [2024-06-10 14:38:21.588434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.254 [2024-06-10 14:38:21.588444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.254 [2024-06-10 14:38:21.588449] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.254 [2024-06-10 14:38:21.588453] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.254 [2024-06-10 14:38:21.588463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.254 qpair failed and we were unable to recover it. 00:29:44.254 [2024-06-10 14:38:21.598415] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.254 [2024-06-10 14:38:21.598462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.254 [2024-06-10 14:38:21.598472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.254 [2024-06-10 14:38:21.598477] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.254 [2024-06-10 14:38:21.598481] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.254 [2024-06-10 14:38:21.598491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.254 qpair failed and we were unable to recover it. 
00:29:44.254 [2024-06-10 14:38:21.608457] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.254 [2024-06-10 14:38:21.608502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.254 [2024-06-10 14:38:21.608512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.254 [2024-06-10 14:38:21.608517] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.254 [2024-06-10 14:38:21.608521] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.254 [2024-06-10 14:38:21.608530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.254 qpair failed and we were unable to recover it. 00:29:44.254 [2024-06-10 14:38:21.618455] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.254 [2024-06-10 14:38:21.618553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.618564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.618568] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.618572] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.618582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.628504] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.628558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.628569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.628573] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.628577] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.628587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 
00:29:44.255 [2024-06-10 14:38:21.638504] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.638548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.638558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.638563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.638567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.638577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.648570] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.648652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.648662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.648667] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.648671] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.648681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.658590] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.658644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.658654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.658659] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.658663] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.658673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 
00:29:44.255 [2024-06-10 14:38:21.668644] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.668687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.668697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.668702] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.668709] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.668718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.678643] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.678688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.678698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.678703] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.678707] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.678716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.688692] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.688739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.688749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.688754] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.688758] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.688768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 
00:29:44.255 [2024-06-10 14:38:21.698713] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.698764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.698775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.698779] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.698784] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.698793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.708724] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.708796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.708806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.708811] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.708815] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.708825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.718627] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.718676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.718687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.718692] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.718696] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.718706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 
00:29:44.255 [2024-06-10 14:38:21.728801] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.728851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.728861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.728866] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.255 [2024-06-10 14:38:21.728870] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.255 [2024-06-10 14:38:21.728880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.255 qpair failed and we were unable to recover it. 00:29:44.255 [2024-06-10 14:38:21.738701] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.255 [2024-06-10 14:38:21.738753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.255 [2024-06-10 14:38:21.738764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.255 [2024-06-10 14:38:21.738769] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.738773] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.738782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.748839] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.748886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.748897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.748901] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.748905] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.748915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 
00:29:44.256 [2024-06-10 14:38:21.758878] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.758925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.758935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.758943] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.758947] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.758956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.768898] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.768972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.768983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.768988] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.768992] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.769001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.778832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.778884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.778895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.778900] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.778904] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.778914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 
00:29:44.256 [2024-06-10 14:38:21.788951] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.788997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.789008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.789013] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.789017] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.789027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.798954] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.798998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.799009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.799014] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.799018] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.799028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.809010] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.809119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.809130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.809135] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.809139] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.809149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 
00:29:44.256 [2024-06-10 14:38:21.819024] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.819084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.819095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.819100] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.819104] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.819114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.829058] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.829106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.829117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.829122] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.829126] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.829136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 00:29:44.256 [2024-06-10 14:38:21.839086] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.256 [2024-06-10 14:38:21.839128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.256 [2024-06-10 14:38:21.839139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.256 [2024-06-10 14:38:21.839144] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.256 [2024-06-10 14:38:21.839148] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.256 [2024-06-10 14:38:21.839158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.256 qpair failed and we were unable to recover it. 
00:29:44.520 [2024-06-10 14:38:21.849110] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.849195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.849211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.849216] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.849220] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.849230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 00:29:44.520 [2024-06-10 14:38:21.859137] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.859186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.859197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.859202] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.859206] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.859215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 00:29:44.520 [2024-06-10 14:38:21.869155] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.869212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.869223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.869228] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.869232] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.869242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 
00:29:44.520 [2024-06-10 14:38:21.879190] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.879232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.879243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.879248] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.879252] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.879261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 00:29:44.520 [2024-06-10 14:38:21.889198] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.889246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.889257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.889261] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.889266] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.889278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 00:29:44.520 [2024-06-10 14:38:21.899207] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.899280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.899291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.899296] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.899300] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.899309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 
00:29:44.520 [2024-06-10 14:38:21.909155] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.909200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.909211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.909215] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.520 [2024-06-10 14:38:21.909219] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.520 [2024-06-10 14:38:21.909229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.520 qpair failed and we were unable to recover it. 00:29:44.520 [2024-06-10 14:38:21.919306] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.520 [2024-06-10 14:38:21.919358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.520 [2024-06-10 14:38:21.919370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.520 [2024-06-10 14:38:21.919374] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.919379] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.919389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:21.929338] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.929412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.929423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.929428] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.929432] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.929442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 
00:29:44.521 [2024-06-10 14:38:21.939358] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.939415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.939428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.939433] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.939437] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.939447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:21.949405] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.949454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.949465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.949470] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.949474] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.949484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:21.959384] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.959429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.959440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.959445] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.959449] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.959459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 
00:29:44.521 [2024-06-10 14:38:21.969421] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.969472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.969483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.969488] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.969492] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.969502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:21.979466] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.979519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.979530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.979534] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.979541] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.979551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:21.989520] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.989598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.989608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.989613] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.989617] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.989626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 
00:29:44.521 [2024-06-10 14:38:21.999496] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:21.999547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:21.999558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:21.999563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:21.999567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:21.999576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:22.009553] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:22.009602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:22.009612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:22.009617] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:22.009621] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:22.009631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.521 [2024-06-10 14:38:22.019597] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:22.019681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:22.019692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:22.019696] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:22.019700] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:22.019710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 
00:29:44.521 [2024-06-10 14:38:22.029582] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.521 [2024-06-10 14:38:22.029642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.521 [2024-06-10 14:38:22.029652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.521 [2024-06-10 14:38:22.029657] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.521 [2024-06-10 14:38:22.029661] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.521 [2024-06-10 14:38:22.029670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.521 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.039629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.039675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.039685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.039690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.039694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.039704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.049670] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.049718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.049728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.049733] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.049737] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.049747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 
00:29:44.522 [2024-06-10 14:38:22.059705] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.059760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.059770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.059775] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.059779] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.059788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.069723] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.069774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.069784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.069788] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.069795] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.069805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.079749] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.079793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.079803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.079808] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.079812] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.079822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 
00:29:44.522 [2024-06-10 14:38:22.089750] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.089797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.089807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.089812] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.089816] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.089826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.099848] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.099912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.099922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.099927] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.099931] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.099940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 00:29:44.522 [2024-06-10 14:38:22.109840] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.522 [2024-06-10 14:38:22.109887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.522 [2024-06-10 14:38:22.109897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.522 [2024-06-10 14:38:22.109903] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.522 [2024-06-10 14:38:22.109907] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.522 [2024-06-10 14:38:22.109917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.522 qpair failed and we were unable to recover it. 
00:29:44.784 [2024-06-10 14:38:22.119864] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.119913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.119924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.119929] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.119933] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.119942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.784 qpair failed and we were unable to recover it. 00:29:44.784 [2024-06-10 14:38:22.129934] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.129983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.129994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.129998] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.130003] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.130012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.784 qpair failed and we were unable to recover it. 00:29:44.784 [2024-06-10 14:38:22.139920] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.140009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.140019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.140024] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.140028] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.140038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.784 qpair failed and we were unable to recover it. 
00:29:44.784 [2024-06-10 14:38:22.150007] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.150050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.150060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.150065] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.150069] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.150080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.784 qpair failed and we were unable to recover it. 00:29:44.784 [2024-06-10 14:38:22.159973] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.160020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.160031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.160038] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.160042] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.160052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.784 qpair failed and we were unable to recover it. 00:29:44.784 [2024-06-10 14:38:22.170010] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.784 [2024-06-10 14:38:22.170054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.784 [2024-06-10 14:38:22.170065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.784 [2024-06-10 14:38:22.170069] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.784 [2024-06-10 14:38:22.170073] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.784 [2024-06-10 14:38:22.170083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 
00:29:44.785 [2024-06-10 14:38:22.180031] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.180081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.180091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.180096] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.180100] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.180109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.190066] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.190150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.190168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.190173] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.190178] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.190191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.199970] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.200026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.200037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.200043] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.200047] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.200057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 
00:29:44.785 [2024-06-10 14:38:22.210125] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.210220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.210232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.210237] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.210241] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.210251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.220158] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.220212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.220223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.220228] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.220232] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.220242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.230173] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.230219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.230230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.230235] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.230239] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.230248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 
00:29:44.785 [2024-06-10 14:38:22.240208] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.240258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.240269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.240274] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.240278] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.240287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.250236] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.250285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.250299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.250304] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.250308] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.250321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.260259] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.260321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.260332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.260336] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.260340] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.260350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 
00:29:44.785 [2024-06-10 14:38:22.270281] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.270362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.270379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.270383] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.270388] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.270398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.280338] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.280386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.280396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.280401] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.280405] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.280415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.290351] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.290440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.290450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.290455] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.290459] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.290472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 
00:29:44.785 [2024-06-10 14:38:22.300381] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.785 [2024-06-10 14:38:22.300432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.785 [2024-06-10 14:38:22.300443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.785 [2024-06-10 14:38:22.300448] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.785 [2024-06-10 14:38:22.300452] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.785 [2024-06-10 14:38:22.300461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.785 qpair failed and we were unable to recover it. 00:29:44.785 [2024-06-10 14:38:22.310379] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.310481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.310492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.310496] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.310501] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.310510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 00:29:44.786 [2024-06-10 14:38:22.320432] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.320482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.320493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.320498] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.320502] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.320512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 
00:29:44.786 [2024-06-10 14:38:22.330344] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.330393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.330404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.330409] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.330413] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.330423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 00:29:44.786 [2024-06-10 14:38:22.340489] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.340543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.340557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.340562] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.340566] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.340576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 00:29:44.786 [2024-06-10 14:38:22.350533] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.350619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.350629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.350634] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.350638] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.350648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 
00:29:44.786 [2024-06-10 14:38:22.360549] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.360593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.360603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.360608] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.360612] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.360621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 00:29:44.786 [2024-06-10 14:38:22.370584] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.786 [2024-06-10 14:38:22.370675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.786 [2024-06-10 14:38:22.370685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.786 [2024-06-10 14:38:22.370690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.786 [2024-06-10 14:38:22.370694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:44.786 [2024-06-10 14:38:22.370703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.786 qpair failed and we were unable to recover it. 00:29:45.048 [2024-06-10 14:38:22.380618] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.048 [2024-06-10 14:38:22.380665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.048 [2024-06-10 14:38:22.380676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.048 [2024-06-10 14:38:22.380681] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.048 [2024-06-10 14:38:22.380685] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.380697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 
00:29:45.049 [2024-06-10 14:38:22.390635] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.390684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.390695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.390699] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.390703] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.390713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.400658] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.400717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.400728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.400732] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.400736] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.400746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.410688] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.410735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.410745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.410749] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.410754] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.410763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 
00:29:45.049 [2024-06-10 14:38:22.420748] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.420816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.420826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.420831] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.420835] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.420845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.430748] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.430794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.430804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.430809] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.430813] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.430823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.440788] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.440835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.440845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.440850] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.440854] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.440863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 
00:29:45.049 [2024-06-10 14:38:22.450697] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.450747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.450757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.450761] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.450766] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.450775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.460815] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.460867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.460877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.460882] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.460886] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.460895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.470846] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.470890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.470900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.470905] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.470912] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.470921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 
00:29:45.049 [2024-06-10 14:38:22.480898] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.480940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.480951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.480956] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.480960] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.480969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.490939] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.049 [2024-06-10 14:38:22.491013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.049 [2024-06-10 14:38:22.491023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.049 [2024-06-10 14:38:22.491027] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.049 [2024-06-10 14:38:22.491032] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.049 [2024-06-10 14:38:22.491041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.049 qpair failed and we were unable to recover it. 00:29:45.049 [2024-06-10 14:38:22.500832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.500935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.500945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.500949] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.500953] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.500963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 
00:29:45.050 [2024-06-10 14:38:22.510985] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.511026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.511037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.511041] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.511046] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.511055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.521018] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.521109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.521126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.521132] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.521137] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.521150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.531029] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.531078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.531090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.531095] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.531099] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.531109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 
00:29:45.050 [2024-06-10 14:38:22.541046] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.541101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.541112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.541116] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.541121] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.541131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.551088] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.551141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.551160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.551165] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.551170] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.551183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.561035] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.561091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.561111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.561123] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.561128] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.561141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 
00:29:45.050 [2024-06-10 14:38:22.571194] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.571269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.571288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.571293] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.571298] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.571311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.581179] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.581227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.581239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.581244] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.581248] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.581259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.591198] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.591244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.591255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.591260] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.591264] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.591274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 
00:29:45.050 [2024-06-10 14:38:22.601233] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.050 [2024-06-10 14:38:22.601283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.050 [2024-06-10 14:38:22.601293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.050 [2024-06-10 14:38:22.601298] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.050 [2024-06-10 14:38:22.601302] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.050 [2024-06-10 14:38:22.601312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.050 qpair failed and we were unable to recover it. 00:29:45.050 [2024-06-10 14:38:22.611253] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.051 [2024-06-10 14:38:22.611333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.051 [2024-06-10 14:38:22.611344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.051 [2024-06-10 14:38:22.611349] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.051 [2024-06-10 14:38:22.611353] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.051 [2024-06-10 14:38:22.611363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.051 qpair failed and we were unable to recover it. 00:29:45.051 [2024-06-10 14:38:22.621280] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.051 [2024-06-10 14:38:22.621373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.051 [2024-06-10 14:38:22.621384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.051 [2024-06-10 14:38:22.621389] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.051 [2024-06-10 14:38:22.621393] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.051 [2024-06-10 14:38:22.621404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.051 qpair failed and we were unable to recover it. 
00:29:45.051 [2024-06-10 14:38:22.631168] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.051 [2024-06-10 14:38:22.631216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.051 [2024-06-10 14:38:22.631227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.051 [2024-06-10 14:38:22.631232] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.051 [2024-06-10 14:38:22.631236] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.051 [2024-06-10 14:38:22.631246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.051 qpair failed and we were unable to recover it. 00:29:45.051 [2024-06-10 14:38:22.641334] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.051 [2024-06-10 14:38:22.641378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.051 [2024-06-10 14:38:22.641388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.051 [2024-06-10 14:38:22.641393] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.051 [2024-06-10 14:38:22.641397] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.051 [2024-06-10 14:38:22.641407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.051 qpair failed and we were unable to recover it. 00:29:45.313 [2024-06-10 14:38:22.651370] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.651421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.651431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.651442] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.651446] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.651456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 
00:29:45.313 [2024-06-10 14:38:22.661255] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.661317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.661328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.661333] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.661337] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.661347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 00:29:45.313 [2024-06-10 14:38:22.671412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.671456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.671467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.671472] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.671476] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.671485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 00:29:45.313 [2024-06-10 14:38:22.681422] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.681467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.681478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.681482] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.681487] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.681497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 
00:29:45.313 [2024-06-10 14:38:22.691474] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.691523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.691534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.691539] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.691543] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.691553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 00:29:45.313 [2024-06-10 14:38:22.701481] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.701535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.701545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.701550] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.701555] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.701564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 00:29:45.313 [2024-06-10 14:38:22.711541] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.711591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.313 [2024-06-10 14:38:22.711602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.313 [2024-06-10 14:38:22.711606] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.313 [2024-06-10 14:38:22.711610] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.313 [2024-06-10 14:38:22.711620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.313 qpair failed and we were unable to recover it. 
00:29:45.313 [2024-06-10 14:38:22.721633] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.313 [2024-06-10 14:38:22.721688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.721698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.721703] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.721707] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.721717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.731585] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.731633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.731643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.731648] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.731652] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.731662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.741628] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.741684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.741697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.741702] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.741706] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.741715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 
00:29:45.314 [2024-06-10 14:38:22.751617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.751662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.751672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.751677] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.751681] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.751691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.761677] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.761770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.761781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.761785] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.761789] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.761799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.771652] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.771697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.771707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.771712] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.771716] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.771726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 
00:29:45.314 [2024-06-10 14:38:22.781716] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.781767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.781777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.781782] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.781786] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.781798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.791759] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.791803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.791813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.791818] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.791822] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.791831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.801776] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.801821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.801832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.801836] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.801840] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.801850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 
00:29:45.314 [2024-06-10 14:38:22.811796] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.811893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.811903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.811908] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.811912] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.811922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.821736] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.821785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.314 [2024-06-10 14:38:22.821796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.314 [2024-06-10 14:38:22.821801] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.314 [2024-06-10 14:38:22.821805] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.314 [2024-06-10 14:38:22.821814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.314 qpair failed and we were unable to recover it. 00:29:45.314 [2024-06-10 14:38:22.831838] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.314 [2024-06-10 14:38:22.831887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.831901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.831905] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.831909] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.831919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 
00:29:45.315 [2024-06-10 14:38:22.841880] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.841928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.841938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.841943] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.841947] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.841956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 00:29:45.315 [2024-06-10 14:38:22.851903] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.851998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.852008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.852013] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.852017] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.852027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 00:29:45.315 [2024-06-10 14:38:22.861940] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.861988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.861999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.862003] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.862007] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.862017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 
00:29:45.315 [2024-06-10 14:38:22.871939] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.871990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.872007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.872013] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.872021] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.872035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 00:29:45.315 [2024-06-10 14:38:22.881850] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.881896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.881908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.881913] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.881917] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.881928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 00:29:45.315 [2024-06-10 14:38:22.892010] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.892057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.892068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.892073] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.892077] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.892088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 
00:29:45.315 [2024-06-10 14:38:22.902072] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.315 [2024-06-10 14:38:22.902146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.315 [2024-06-10 14:38:22.902157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.315 [2024-06-10 14:38:22.902162] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.315 [2024-06-10 14:38:22.902166] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.315 [2024-06-10 14:38:22.902177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.315 qpair failed and we were unable to recover it. 00:29:45.577 [2024-06-10 14:38:22.912078] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.912124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.912136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.912141] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.912145] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.912155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 00:29:45.577 [2024-06-10 14:38:22.922109] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.922165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.922183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.922189] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.922194] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.922207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 
00:29:45.577 [2024-06-10 14:38:22.932132] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.932181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.932193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.932198] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.932202] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.932213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 00:29:45.577 [2024-06-10 14:38:22.942169] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.942214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.942225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.942230] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.942234] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.942244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 00:29:45.577 [2024-06-10 14:38:22.952188] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.952234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.952244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.952249] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.952253] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.952263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 
00:29:45.577 [2024-06-10 14:38:22.962145] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.577 [2024-06-10 14:38:22.962201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.577 [2024-06-10 14:38:22.962211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.577 [2024-06-10 14:38:22.962219] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.577 [2024-06-10 14:38:22.962223] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.577 [2024-06-10 14:38:22.962233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.577 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:22.972220] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:22.972289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:22.972299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:22.972304] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:22.972308] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:22.972322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:22.982322] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:22.982374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:22.982385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:22.982389] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:22.982394] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:22.982404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 
00:29:45.578 [2024-06-10 14:38:22.992299] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:22.992358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:22.992369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:22.992374] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:22.992379] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:22.992388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.002352] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.002395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.002405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.002410] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.002414] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.002424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.012375] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.012427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.012439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.012443] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.012448] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.012458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 
00:29:45.578 [2024-06-10 14:38:23.022400] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.022461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.022473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.022478] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.022482] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.022492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.032407] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.032455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.032466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.032470] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.032475] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.032484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.042459] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.042571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.042582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.042587] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.042591] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.042601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 
00:29:45.578 [2024-06-10 14:38:23.052457] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.052507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.052518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.052526] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.052530] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.052540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.062387] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.062439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.062449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.062454] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.062458] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.062467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 00:29:45.578 [2024-06-10 14:38:23.072511] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.072567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.072577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.072582] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.578 [2024-06-10 14:38:23.072586] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.578 [2024-06-10 14:38:23.072596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.578 qpair failed and we were unable to recover it. 
00:29:45.578 [2024-06-10 14:38:23.082549] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.578 [2024-06-10 14:38:23.082594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.578 [2024-06-10 14:38:23.082605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.578 [2024-06-10 14:38:23.082609] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.082613] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.082623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.092582] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.092629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.092640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.092644] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.092649] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.092658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.102629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.102674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.102685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.102690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.102694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.102703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 
00:29:45.579 [2024-06-10 14:38:23.112646] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.112690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.112701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.112705] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.112709] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.112719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.122671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.122716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.122727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.122731] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.122735] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.122745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.132703] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.132751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.132761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.132766] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.132770] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.132779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 
00:29:45.579 [2024-06-10 14:38:23.142734] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.142786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.142799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.142804] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.142808] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.142817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.152776] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.152854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.152864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.152869] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.152873] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.152883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 00:29:45.579 [2024-06-10 14:38:23.162787] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.579 [2024-06-10 14:38:23.162831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.579 [2024-06-10 14:38:23.162841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.579 [2024-06-10 14:38:23.162846] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.579 [2024-06-10 14:38:23.162850] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.579 [2024-06-10 14:38:23.162859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.579 qpair failed and we were unable to recover it. 
00:29:45.842 [2024-06-10 14:38:23.172824] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.842 [2024-06-10 14:38:23.172871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.842 [2024-06-10 14:38:23.172881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.842 [2024-06-10 14:38:23.172886] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.842 [2024-06-10 14:38:23.172890] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.842 [2024-06-10 14:38:23.172900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.842 qpair failed and we were unable to recover it. 00:29:45.842 [2024-06-10 14:38:23.182946] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.842 [2024-06-10 14:38:23.183039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.842 [2024-06-10 14:38:23.183050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.842 [2024-06-10 14:38:23.183055] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.842 [2024-06-10 14:38:23.183059] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.842 [2024-06-10 14:38:23.183071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.842 qpair failed and we were unable to recover it. 00:29:45.842 [2024-06-10 14:38:23.192917] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.842 [2024-06-10 14:38:23.192964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.842 [2024-06-10 14:38:23.192974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.842 [2024-06-10 14:38:23.192979] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.842 [2024-06-10 14:38:23.192983] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.842 [2024-06-10 14:38:23.192993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.842 qpair failed and we were unable to recover it. 
00:29:45.842 [2024-06-10 14:38:23.202840] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.842 [2024-06-10 14:38:23.202887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.842 [2024-06-10 14:38:23.202897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.842 [2024-06-10 14:38:23.202902] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.202906] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.202916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.212984] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.213030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.213040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.213045] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.213049] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.213059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.223007] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.223057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.223068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.223073] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.223077] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.223087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 
00:29:45.843 [2024-06-10 14:38:23.232971] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.233017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.233031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.233035] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.233039] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.233049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.243059] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.243107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.243125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.243131] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.243136] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.243148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.253055] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.253107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.253125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.253130] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.253135] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.253148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 
00:29:45.843 [2024-06-10 14:38:23.263121] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.263194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.263212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.263217] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.263222] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.263235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.273077] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.273125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.273137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.273142] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.273152] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.273163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.283136] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.283183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.283194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.283198] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.283202] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.283212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 
00:29:45.843 [2024-06-10 14:38:23.293159] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.293213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.293224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.293229] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.293233] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.293243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.303182] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.303286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.303297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.303302] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.303306] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.303320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 00:29:45.843 [2024-06-10 14:38:23.313092] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.313144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.313154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.313159] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.313164] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.843 [2024-06-10 14:38:23.313174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.843 qpair failed and we were unable to recover it. 
00:29:45.843 [2024-06-10 14:38:23.323257] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.843 [2024-06-10 14:38:23.323311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.843 [2024-06-10 14:38:23.323329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.843 [2024-06-10 14:38:23.323335] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.843 [2024-06-10 14:38:23.323339] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.323350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.333292] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.333353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.333364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.333369] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.333373] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.333383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.343273] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.343329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.343340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.343344] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.343348] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.343358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 
00:29:45.844 [2024-06-10 14:38:23.353342] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.353387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.353398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.353402] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.353406] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.353416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.363357] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.363404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.363415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.363420] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.363427] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.363437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.373271] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.373368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.373378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.373383] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.373387] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.373397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 
00:29:45.844 [2024-06-10 14:38:23.383414] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.383501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.383512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.383517] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.383521] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.383530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.393453] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.393504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.393514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.393519] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.393523] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.393533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.403482] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.403559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.403570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.403574] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.403579] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.403588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 
00:29:45.844 [2024-06-10 14:38:23.413485] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.413534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.413545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.413549] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.413554] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.413563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.423572] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.423637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.423648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.423653] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.423657] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.423666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 00:29:45.844 [2024-06-10 14:38:23.433540] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.844 [2024-06-10 14:38:23.433593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.844 [2024-06-10 14:38:23.433603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.844 [2024-06-10 14:38:23.433608] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.844 [2024-06-10 14:38:23.433612] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:45.844 [2024-06-10 14:38:23.433622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.844 qpair failed and we were unable to recover it. 
00:29:46.107 [2024-06-10 14:38:23.443581] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.443630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.443640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.107 [2024-06-10 14:38:23.443645] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.107 [2024-06-10 14:38:23.443649] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.107 [2024-06-10 14:38:23.443659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.107 qpair failed and we were unable to recover it. 00:29:46.107 [2024-06-10 14:38:23.453629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.453708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.453718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.107 [2024-06-10 14:38:23.453725] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.107 [2024-06-10 14:38:23.453729] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.107 [2024-06-10 14:38:23.453739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.107 qpair failed and we were unable to recover it. 00:29:46.107 [2024-06-10 14:38:23.463653] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.463744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.463755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.107 [2024-06-10 14:38:23.463760] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.107 [2024-06-10 14:38:23.463764] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.107 [2024-06-10 14:38:23.463773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.107 qpair failed and we were unable to recover it. 
00:29:46.107 [2024-06-10 14:38:23.473683] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.473754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.473764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.107 [2024-06-10 14:38:23.473769] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.107 [2024-06-10 14:38:23.473773] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.107 [2024-06-10 14:38:23.473783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.107 qpair failed and we were unable to recover it. 00:29:46.107 [2024-06-10 14:38:23.483752] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.483823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.483833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.107 [2024-06-10 14:38:23.483838] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.107 [2024-06-10 14:38:23.483842] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.107 [2024-06-10 14:38:23.483851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.107 qpair failed and we were unable to recover it. 00:29:46.107 [2024-06-10 14:38:23.493762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.107 [2024-06-10 14:38:23.493823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.107 [2024-06-10 14:38:23.493834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.493838] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.493842] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.493852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 
00:29:46.108 [2024-06-10 14:38:23.503766] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.503853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.503863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.503868] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.503872] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.503881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.513776] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.513821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.513831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.513836] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.513840] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.513850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.523807] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.523852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.523863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.523868] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.523872] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.523882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 
00:29:46.108 [2024-06-10 14:38:23.533844] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.533893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.533904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.533909] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.533913] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.533923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.543902] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.543952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.543965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.543970] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.543974] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.543984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.553762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.553816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.553826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.553831] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.553835] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.553845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 
00:29:46.108 [2024-06-10 14:38:23.563910] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.563954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.563965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.563970] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.563974] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.563983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.573968] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.574017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.574028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.574032] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.574038] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.574048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.583865] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.583920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.583932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.583936] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.583940] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.583953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 
00:29:46.108 [2024-06-10 14:38:23.594012] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.594065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.594076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.594081] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.594085] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.594094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.604039] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.604086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.604097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.604102] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.108 [2024-06-10 14:38:23.604106] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.108 [2024-06-10 14:38:23.604116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.108 qpair failed and we were unable to recover it. 00:29:46.108 [2024-06-10 14:38:23.614095] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.108 [2024-06-10 14:38:23.614180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.108 [2024-06-10 14:38:23.614190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.108 [2024-06-10 14:38:23.614195] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.614199] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.614208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 
00:29:46.109 [2024-06-10 14:38:23.624089] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.624141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.624151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.624156] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.624160] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.624170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.109 [2024-06-10 14:38:23.634167] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.634211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.634224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.634229] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.634233] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.634243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.109 [2024-06-10 14:38:23.644141] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.644229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.644240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.644244] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.644248] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.644258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 
00:29:46.109 [2024-06-10 14:38:23.654170] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.654247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.654258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.654262] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.654266] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.654276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.109 [2024-06-10 14:38:23.664186] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.664236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.664246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.664251] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.664255] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.664265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.109 [2024-06-10 14:38:23.674215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.674266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.674277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.674282] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.674289] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.674298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 
00:29:46.109 [2024-06-10 14:38:23.684244] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.684294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.684304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.684309] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.684313] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.684327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.109 [2024-06-10 14:38:23.694283] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.109 [2024-06-10 14:38:23.694332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.109 [2024-06-10 14:38:23.694343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.109 [2024-06-10 14:38:23.694348] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.109 [2024-06-10 14:38:23.694352] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.109 [2024-06-10 14:38:23.694361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.109 qpair failed and we were unable to recover it. 00:29:46.372 [2024-06-10 14:38:23.704214] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.372 [2024-06-10 14:38:23.704265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.372 [2024-06-10 14:38:23.704276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.372 [2024-06-10 14:38:23.704281] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.372 [2024-06-10 14:38:23.704285] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.372 [2024-06-10 14:38:23.704295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.372 qpair failed and we were unable to recover it. 
00:29:46.372 [2024-06-10 14:38:23.714387] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.372 [2024-06-10 14:38:23.714435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.372 [2024-06-10 14:38:23.714446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.372 [2024-06-10 14:38:23.714451] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.372 [2024-06-10 14:38:23.714455] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.372 [2024-06-10 14:38:23.714465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.724341] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.724393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.724404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.724409] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.724413] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.724423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.734353] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.734401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.734412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.734416] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.734421] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.734431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 
00:29:46.373 [2024-06-10 14:38:23.744420] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.744511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.744521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.744526] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.744530] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.744539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.754421] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.754469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.754480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.754485] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.754489] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.754499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.764462] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.764508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.764519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.764524] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.764531] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.764541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 
00:29:46.373 [2024-06-10 14:38:23.774467] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.774516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.774527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.774532] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.774536] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.774546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.784540] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.784607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.784618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.784622] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.784627] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.784636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 00:29:46.373 [2024-06-10 14:38:23.794534] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.794587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.794598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.373 [2024-06-10 14:38:23.794602] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.373 [2024-06-10 14:38:23.794607] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.373 [2024-06-10 14:38:23.794616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.373 qpair failed and we were unable to recover it. 
00:29:46.373 [2024-06-10 14:38:23.804565] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.373 [2024-06-10 14:38:23.804615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.373 [2024-06-10 14:38:23.804625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.804630] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.804634] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.804644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.814611] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.814705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.814716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.814720] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.814724] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.814734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.824634] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.824713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.824724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.824729] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.824733] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.824743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 
00:29:46.374 [2024-06-10 14:38:23.834668] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.834714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.834725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.834730] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.834734] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.834744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.844694] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.844740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.844751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.844755] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.844760] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.844769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.854710] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.854756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.854767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.854774] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.854778] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.854788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 
00:29:46.374 [2024-06-10 14:38:23.864751] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.864805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.864815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.864820] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.864824] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.864834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.874758] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.874803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.874814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.874818] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.874823] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.874832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 00:29:46.374 [2024-06-10 14:38:23.884684] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.884736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.374 [2024-06-10 14:38:23.884747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.374 [2024-06-10 14:38:23.884752] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.374 [2024-06-10 14:38:23.884756] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.374 [2024-06-10 14:38:23.884766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.374 qpair failed and we were unable to recover it. 
00:29:46.374 [2024-06-10 14:38:23.894866] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.374 [2024-06-10 14:38:23.894963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.894973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.894978] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.894982] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.894992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.375 [2024-06-10 14:38:23.904875] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.904925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.904936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.904941] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.904945] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.904954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.375 [2024-06-10 14:38:23.914910] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.914954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.914965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.914970] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.914974] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.914983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 
00:29:46.375 [2024-06-10 14:38:23.924830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.924873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.924883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.924888] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.924892] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.924902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.375 [2024-06-10 14:38:23.934970] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.935020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.935030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.935035] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.935039] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.935049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.375 [2024-06-10 14:38:23.944982] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.945032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.945046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.945051] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.945055] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.945065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 
00:29:46.375 [2024-06-10 14:38:23.955003] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.955048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.955059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.955064] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.955068] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.955078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.375 [2024-06-10 14:38:23.965059] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.375 [2024-06-10 14:38:23.965107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.375 [2024-06-10 14:38:23.965125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.375 [2024-06-10 14:38:23.965131] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.375 [2024-06-10 14:38:23.965136] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.375 [2024-06-10 14:38:23.965150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.375 qpair failed and we were unable to recover it. 00:29:46.637 [2024-06-10 14:38:23.975089] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:23.975142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:23.975160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:23.975166] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:23.975170] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:23.975184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 
00:29:46.638 [2024-06-10 14:38:23.985135] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:23.985230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:23.985248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:23.985254] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:23.985259] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:23.985276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:23.995015] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:23.995064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:23.995076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:23.995081] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:23.995086] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:23.995096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.005136] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.005183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.005194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.005198] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.005203] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.005213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 
00:29:46.638 [2024-06-10 14:38:24.015186] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.015236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.015246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.015251] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.015255] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.015265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.025205] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.025302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.025313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.025323] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.025327] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.025337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.035220] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.035266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.035283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.035288] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.035292] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.035302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 
00:29:46.638 [2024-06-10 14:38:24.045215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.045270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.045281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.045286] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.045290] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.045300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.055305] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.055354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.055365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.055370] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.055374] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.055384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.065331] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.065380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.065390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.065395] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.065399] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.065409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 
00:29:46.638 [2024-06-10 14:38:24.075309] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.075357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.075367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.075372] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.075376] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.075389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.638 [2024-06-10 14:38:24.085376] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.638 [2024-06-10 14:38:24.085457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.638 [2024-06-10 14:38:24.085468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.638 [2024-06-10 14:38:24.085473] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.638 [2024-06-10 14:38:24.085477] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.638 [2024-06-10 14:38:24.085487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.638 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.095427] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.095477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.095488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.095492] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.095496] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.095506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 
00:29:46.639 [2024-06-10 14:38:24.105468] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.105519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.105530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.105535] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.105539] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.105549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.115453] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.115496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.115507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.115512] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.115516] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.115525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.125492] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.125575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.125586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.125591] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.125595] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.125605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 
00:29:46.639 [2024-06-10 14:38:24.135503] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.135548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.135559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.135564] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.135568] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.135577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.145458] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.145510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.145521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.145526] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.145530] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.145540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.155559] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.155609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.155620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.155625] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.155629] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.155639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 
00:29:46.639 [2024-06-10 14:38:24.165630] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.165674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.165685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.165690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.165697] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.165707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.175629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.175674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.175685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.175690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.175694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.175704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.185657] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.185714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.185725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.185730] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.185734] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.185743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 
00:29:46.639 [2024-06-10 14:38:24.195717] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.195788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.195798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.195802] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.195807] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.195816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.205719] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.639 [2024-06-10 14:38:24.205769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.639 [2024-06-10 14:38:24.205779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.639 [2024-06-10 14:38:24.205784] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.639 [2024-06-10 14:38:24.205788] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.639 [2024-06-10 14:38:24.205798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.639 qpair failed and we were unable to recover it. 00:29:46.639 [2024-06-10 14:38:24.215747] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.640 [2024-06-10 14:38:24.215798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.640 [2024-06-10 14:38:24.215809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.640 [2024-06-10 14:38:24.215813] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.640 [2024-06-10 14:38:24.215818] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.640 [2024-06-10 14:38:24.215827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.640 qpair failed and we were unable to recover it. 
00:29:46.640 [2024-06-10 14:38:24.225797] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.640 [2024-06-10 14:38:24.225850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.640 [2024-06-10 14:38:24.225861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.640 [2024-06-10 14:38:24.225866] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.640 [2024-06-10 14:38:24.225870] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.640 [2024-06-10 14:38:24.225880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.640 qpair failed and we were unable to recover it. 00:29:46.902 [2024-06-10 14:38:24.235818] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.235894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.235905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.235910] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.235914] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.235923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 00:29:46.902 [2024-06-10 14:38:24.245830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.245872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.245883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.245888] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.245892] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.245902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 
00:29:46.902 [2024-06-10 14:38:24.255925] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.255974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.255985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.255993] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.255997] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.256007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 00:29:46.902 [2024-06-10 14:38:24.265851] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.265917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.265928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.265933] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.265937] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.265946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 00:29:46.902 [2024-06-10 14:38:24.275810] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.275858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.275869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.275874] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.275878] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.275888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 
00:29:46.902 [2024-06-10 14:38:24.285956] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.286001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.286012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.286016] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.902 [2024-06-10 14:38:24.286021] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.902 [2024-06-10 14:38:24.286030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.902 qpair failed and we were unable to recover it. 00:29:46.902 [2024-06-10 14:38:24.295984] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.902 [2024-06-10 14:38:24.296033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.902 [2024-06-10 14:38:24.296043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.902 [2024-06-10 14:38:24.296048] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.296052] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.296062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.306010] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.306081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.306092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.306097] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.306101] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.306110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 
00:29:46.903 [2024-06-10 14:38:24.316034] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.316082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.316092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.316097] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.316101] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.316110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.326058] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.326104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.326115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.326120] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.326124] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.326133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.335968] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.336016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.336027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.336032] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.336036] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.336046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 
00:29:46.903 [2024-06-10 14:38:24.346108] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.346180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.346191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.346199] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.346203] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.346212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.356144] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.356199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.356217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.356222] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.356227] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.356240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.366181] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.366244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.366256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.366261] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.366265] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.366276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 
00:29:46.903 [2024-06-10 14:38:24.376206] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.376298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.376309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.376313] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.376320] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.376331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.386237] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.386293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.386304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.386308] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.386312] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.386325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.396270] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.396340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.396350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.396355] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.396359] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.396369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 
00:29:46.903 [2024-06-10 14:38:24.406289] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.406338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.406349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.406354] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.406358] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.903 [2024-06-10 14:38:24.406368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.903 qpair failed and we were unable to recover it. 00:29:46.903 [2024-06-10 14:38:24.416322] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.903 [2024-06-10 14:38:24.416368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.903 [2024-06-10 14:38:24.416378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.903 [2024-06-10 14:38:24.416383] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.903 [2024-06-10 14:38:24.416387] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.416397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 00:29:46.904 [2024-06-10 14:38:24.426337] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.426393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.426404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.426409] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.426413] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.426423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 
00:29:46.904 [2024-06-10 14:38:24.436359] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.436405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.436419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.436423] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.436427] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.436437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 00:29:46.904 [2024-06-10 14:38:24.446386] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.446436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.446446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.446451] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.446456] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.446465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 00:29:46.904 [2024-06-10 14:38:24.456427] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.456474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.456485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.456490] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.456494] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.456504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 
00:29:46.904 [2024-06-10 14:38:24.466495] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.466548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.466558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.466563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.466567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.466577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 00:29:46.904 [2024-06-10 14:38:24.476498] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.476545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.476556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.476561] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.476565] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.476578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 00:29:46.904 [2024-06-10 14:38:24.486492] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.904 [2024-06-10 14:38:24.486535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.904 [2024-06-10 14:38:24.486546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.904 [2024-06-10 14:38:24.486551] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.904 [2024-06-10 14:38:24.486555] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:46.904 [2024-06-10 14:38:24.486565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.904 qpair failed and we were unable to recover it. 
00:29:47.167 [2024-06-10 14:38:24.496585] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.167 [2024-06-10 14:38:24.496653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.167 [2024-06-10 14:38:24.496664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.167 [2024-06-10 14:38:24.496668] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.167 [2024-06-10 14:38:24.496673] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.167 [2024-06-10 14:38:24.496682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.167 qpair failed and we were unable to recover it. 00:29:47.167 [2024-06-10 14:38:24.506510] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.167 [2024-06-10 14:38:24.506560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.167 [2024-06-10 14:38:24.506570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.167 [2024-06-10 14:38:24.506575] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.167 [2024-06-10 14:38:24.506579] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.167 [2024-06-10 14:38:24.506589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.167 qpair failed and we were unable to recover it. 00:29:47.167 [2024-06-10 14:38:24.516495] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.167 [2024-06-10 14:38:24.516538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.167 [2024-06-10 14:38:24.516548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.167 [2024-06-10 14:38:24.516553] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.516557] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.516567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 
00:29:47.168 [2024-06-10 14:38:24.526624] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.526671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.526685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.526690] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.526694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.526703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.536657] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.536705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.536716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.536720] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.536725] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.536734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.546673] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.546728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.546738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.546743] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.546747] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.546757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 
00:29:47.168 [2024-06-10 14:38:24.556699] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.556748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.556758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.556763] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.556767] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.556777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.566730] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.566775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.566785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.566790] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.566797] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.566806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.576775] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.576824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.576834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.576838] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.576843] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.576852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 
00:29:47.168 [2024-06-10 14:38:24.586791] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.586888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.586898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.586903] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.586907] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.586917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.596842] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.596888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.596899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.596903] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.596908] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.596917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.606847] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.606892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.606903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.606908] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.606912] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.606922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 
00:29:47.168 [2024-06-10 14:38:24.616898] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.616987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.616998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.617002] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.617007] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.617016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.626912] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.626959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.626970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.626974] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.626978] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.626988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 00:29:47.168 [2024-06-10 14:38:24.636915] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.168 [2024-06-10 14:38:24.636976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.168 [2024-06-10 14:38:24.636987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.168 [2024-06-10 14:38:24.636991] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.168 [2024-06-10 14:38:24.636996] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.168 [2024-06-10 14:38:24.637006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.168 qpair failed and we were unable to recover it. 
00:29:47.168 [2024-06-10 14:38:24.646966] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.647018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.647028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.647033] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.647037] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.647046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.656996] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.657046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.657064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.657074] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.657079] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.657092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.667045] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.667130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.667148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.667153] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.667158] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.667171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 
00:29:47.169 [2024-06-10 14:38:24.677029] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.677074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.677086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.677090] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.677095] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.677105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.686936] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.686982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.686993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.686998] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.687002] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.687012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.697089] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.697141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.697153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.697158] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.697162] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.697173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 
00:29:47.169 [2024-06-10 14:38:24.707172] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.707249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.707267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.707273] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.707278] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.707291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.717157] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.717207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.717218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.717223] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.717227] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.717238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.727200] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.727243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.727254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.727259] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.727263] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.727273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 
00:29:47.169 [2024-06-10 14:38:24.737207] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.737254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.737264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.737269] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.737273] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.737283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.747244] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.747301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.747312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.747323] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.747328] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.747338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 00:29:47.169 [2024-06-10 14:38:24.757275] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.169 [2024-06-10 14:38:24.757367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.169 [2024-06-10 14:38:24.757378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.169 [2024-06-10 14:38:24.757382] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.169 [2024-06-10 14:38:24.757386] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.169 [2024-06-10 14:38:24.757396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.169 qpair failed and we were unable to recover it. 
00:29:47.432 [2024-06-10 14:38:24.767303] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.767352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.767362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.767367] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.767372] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.767381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.777312] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.777362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.777373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.777377] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.777382] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.777392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.787369] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.787463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.787474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.787479] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.787483] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.787494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 
00:29:47.432 [2024-06-10 14:38:24.797396] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.797463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.797474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.797478] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.797482] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.797492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.807412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.807459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.807469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.807474] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.807478] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.807488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.817463] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.817511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.817521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.817526] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.817530] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.817540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 
00:29:47.432 [2024-06-10 14:38:24.827479] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.827565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.827576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.827580] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.827585] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.827594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.837510] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.837576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.837592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.837597] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.837601] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.837611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.847538] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.847586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.847596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.847601] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.847605] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.847615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 
00:29:47.432 [2024-06-10 14:38:24.857588] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.857632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.857642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.857647] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.857651] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.857661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.867594] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.867642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.867653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.432 [2024-06-10 14:38:24.867657] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.432 [2024-06-10 14:38:24.867661] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.432 [2024-06-10 14:38:24.867671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.432 qpair failed and we were unable to recover it. 00:29:47.432 [2024-06-10 14:38:24.877611] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.432 [2024-06-10 14:38:24.877656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.432 [2024-06-10 14:38:24.877666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.877671] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.877675] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.877687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 
00:29:47.433 [2024-06-10 14:38:24.887645] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.887688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.887699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.887703] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.887708] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.887717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.897672] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.897721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.897731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.897735] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.897739] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.897749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.907757] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.907811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.907821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.907826] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.907830] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.907839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 
00:29:47.433 [2024-06-10 14:38:24.917731] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.917775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.917785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.917790] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.917794] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.917803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.927772] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.927821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.927834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.927839] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.927843] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.927852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.937795] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.937882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.937893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.937898] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.937902] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.937911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 
00:29:47.433 [2024-06-10 14:38:24.947810] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.947861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.947873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.947877] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.947881] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.947892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.957877] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.957964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.957974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.957979] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.957983] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.957993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.967858] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.967904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.967915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.967920] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.967927] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.967937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 
00:29:47.433 [2024-06-10 14:38:24.977909] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.977955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.977965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.977970] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.977974] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.977983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.987931] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.987983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.987994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.987999] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.433 [2024-06-10 14:38:24.988003] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.433 [2024-06-10 14:38:24.988012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.433 qpair failed and we were unable to recover it. 00:29:47.433 [2024-06-10 14:38:24.997998] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.433 [2024-06-10 14:38:24.998065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.433 [2024-06-10 14:38:24.998075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.433 [2024-06-10 14:38:24.998080] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.434 [2024-06-10 14:38:24.998084] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.434 [2024-06-10 14:38:24.998093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.434 qpair failed and we were unable to recover it. 
00:29:47.434 [2024-06-10 14:38:25.007980] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.434 [2024-06-10 14:38:25.008030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.434 [2024-06-10 14:38:25.008040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.434 [2024-06-10 14:38:25.008045] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.434 [2024-06-10 14:38:25.008049] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.434 [2024-06-10 14:38:25.008058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.434 qpair failed and we were unable to recover it. 00:29:47.434 [2024-06-10 14:38:25.017891] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.434 [2024-06-10 14:38:25.017986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.434 [2024-06-10 14:38:25.017997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.434 [2024-06-10 14:38:25.018001] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.434 [2024-06-10 14:38:25.018006] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.434 [2024-06-10 14:38:25.018015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.434 qpair failed and we were unable to recover it. 00:29:47.696 [2024-06-10 14:38:25.028045] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.696 [2024-06-10 14:38:25.028097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.696 [2024-06-10 14:38:25.028107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.696 [2024-06-10 14:38:25.028112] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.696 [2024-06-10 14:38:25.028116] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.696 [2024-06-10 14:38:25.028127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.696 qpair failed and we were unable to recover it. 
00:29:47.696 [2024-06-10 14:38:25.038072] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.696 [2024-06-10 14:38:25.038119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.696 [2024-06-10 14:38:25.038130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.038135] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.038139] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.038149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.048098] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.048148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.048166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.048172] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.048177] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.048190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.058118] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.058165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.058176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.058181] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.058189] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.058199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 
00:29:47.697 [2024-06-10 14:38:25.068160] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.068209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.068220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.068225] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.068229] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.068240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.078190] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.078268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.078279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.078284] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.078288] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.078298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.088221] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.088263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.088274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.088279] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.088283] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.088293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 
00:29:47.697 [2024-06-10 14:38:25.098235] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.098283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.098294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.098298] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.098302] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.098312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.108288] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.108366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.108376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.108381] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.108385] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.108395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.118302] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.118350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.118361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.118365] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.118369] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.118379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 
00:29:47.697 [2024-06-10 14:38:25.128342] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.128394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.128407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.128411] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.128416] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.128427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.138398] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.138446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.138458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.138463] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.138467] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.138477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 00:29:47.697 [2024-06-10 14:38:25.148399] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.148452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.148464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.148471] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.148475] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.697 [2024-06-10 14:38:25.148486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.697 qpair failed and we were unable to recover it. 
00:29:47.697 [2024-06-10 14:38:25.158309] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.697 [2024-06-10 14:38:25.158358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.697 [2024-06-10 14:38:25.158369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.697 [2024-06-10 14:38:25.158374] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.697 [2024-06-10 14:38:25.158378] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.158388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.168441] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.168489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.168499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.168504] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.168508] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.168518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.178484] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.178532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.178543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.178547] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.178551] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.178561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-06-10 14:38:25.188506] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.188560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.188570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.188575] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.188579] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.188588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.198566] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.198635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.198645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.198650] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.198654] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.198664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.208569] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.208615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.208625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.208630] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.208634] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.208644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-06-10 14:38:25.218602] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.218649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.218660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.218664] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.218669] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.218678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.228622] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.228705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.228715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.228720] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.228724] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.228734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.238698] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.238760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.238773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.238778] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.238782] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.238792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-06-10 14:38:25.248680] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.248723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.248735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.248739] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.248744] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.248754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.258694] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.258739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.258749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.258754] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.258758] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.258767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 00:29:47.698 [2024-06-10 14:38:25.268720] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.268774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.268785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.268789] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.268793] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.268803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.698 qpair failed and we were unable to recover it. 
00:29:47.698 [2024-06-10 14:38:25.278623] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.698 [2024-06-10 14:38:25.278669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.698 [2024-06-10 14:38:25.278680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.698 [2024-06-10 14:38:25.278685] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.698 [2024-06-10 14:38:25.278689] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.698 [2024-06-10 14:38:25.278702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.699 qpair failed and we were unable to recover it. 00:29:47.699 [2024-06-10 14:38:25.288774] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.699 [2024-06-10 14:38:25.288821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.699 [2024-06-10 14:38:25.288832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.699 [2024-06-10 14:38:25.288837] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.699 [2024-06-10 14:38:25.288841] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.699 [2024-06-10 14:38:25.288851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.699 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.298816] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.298866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.298876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.298881] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.298885] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.298895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 
00:29:47.963 [2024-06-10 14:38:25.308827] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.308878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.308889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.308893] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.308897] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.308907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.318861] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.318947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.318958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.318963] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.318967] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.318977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.328877] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.328924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.328938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.328943] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.328947] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.328957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 
00:29:47.963 [2024-06-10 14:38:25.338932] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.338979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.338990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.338995] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.338999] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.339009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.348945] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.349000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.349011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.349016] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.349020] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.349029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.358956] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.359003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.359013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.359018] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.359022] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.359031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 
00:29:47.963 [2024-06-10 14:38:25.368990] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.369034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.369045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.369049] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.369056] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.369066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.378901] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.378962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.378973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.378977] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.378981] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.378991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 00:29:47.963 [2024-06-10 14:38:25.389051] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.389100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.389111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.389116] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.963 [2024-06-10 14:38:25.389120] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.963 [2024-06-10 14:38:25.389130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.963 qpair failed and we were unable to recover it. 
00:29:47.963 [2024-06-10 14:38:25.399077] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.963 [2024-06-10 14:38:25.399127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.963 [2024-06-10 14:38:25.399138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.963 [2024-06-10 14:38:25.399143] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.399147] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.399157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.409103] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.409154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.409172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.409178] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.409182] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.409196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.419152] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.419249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.419261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.419266] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.419271] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.419281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 
00:29:47.964 [2024-06-10 14:38:25.429156] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.429206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.429217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.429222] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.429226] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.429236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.439171] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.439231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.439243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.439247] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.439252] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.439261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.449226] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.449276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.449286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.449291] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.449295] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.449305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 
00:29:47.964 [2024-06-10 14:38:25.459266] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.459357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.459368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.459374] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.459381] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.459392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.469245] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.469312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.469326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.469331] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.469335] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.469345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.479268] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.479319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.479329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.479334] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.479338] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.479348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 
00:29:47.964 [2024-06-10 14:38:25.489293] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.489343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.489354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.489358] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.489362] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.489373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.499362] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.499405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.499416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.499421] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.499425] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.499435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.964 [2024-06-10 14:38:25.509383] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.509468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.509478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.509483] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.509487] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.509497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 
00:29:47.964 [2024-06-10 14:38:25.519398] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.964 [2024-06-10 14:38:25.519448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.964 [2024-06-10 14:38:25.519459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.964 [2024-06-10 14:38:25.519464] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.964 [2024-06-10 14:38:25.519468] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.964 [2024-06-10 14:38:25.519478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.964 qpair failed and we were unable to recover it. 00:29:47.965 [2024-06-10 14:38:25.529321] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.965 [2024-06-10 14:38:25.529365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.965 [2024-06-10 14:38:25.529376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.965 [2024-06-10 14:38:25.529381] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.965 [2024-06-10 14:38:25.529385] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.965 [2024-06-10 14:38:25.529395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-06-10 14:38:25.539486] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.965 [2024-06-10 14:38:25.539538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.965 [2024-06-10 14:38:25.539549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.965 [2024-06-10 14:38:25.539554] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.965 [2024-06-10 14:38:25.539558] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.965 [2024-06-10 14:38:25.539568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.965 qpair failed and we were unable to recover it. 
00:29:47.965 [2024-06-10 14:38:25.549511] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.965 [2024-06-10 14:38:25.549604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.965 [2024-06-10 14:38:25.549614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.965 [2024-06-10 14:38:25.549623] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.965 [2024-06-10 14:38:25.549628] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:47.965 [2024-06-10 14:38:25.549637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.965 qpair failed and we were unable to recover it. 00:29:48.228 [2024-06-10 14:38:25.559505] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.559550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.559561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.559566] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.559570] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.559580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 00:29:48.228 [2024-06-10 14:38:25.569523] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.569568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.569578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.569583] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.569587] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.569597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 
00:29:48.228 [2024-06-10 14:38:25.579582] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.579629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.579639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.579644] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.579648] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.579657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 00:29:48.228 [2024-06-10 14:38:25.589591] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.589650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.589660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.589665] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.589669] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.589678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 00:29:48.228 [2024-06-10 14:38:25.599609] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.599660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.599670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.599675] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.599679] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.599689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 
00:29:48.228 [2024-06-10 14:38:25.609686] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.609734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.609744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.609749] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.228 [2024-06-10 14:38:25.609753] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.228 [2024-06-10 14:38:25.609763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.228 qpair failed and we were unable to recover it. 00:29:48.228 [2024-06-10 14:38:25.619709] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.228 [2024-06-10 14:38:25.619757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.228 [2024-06-10 14:38:25.619769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.228 [2024-06-10 14:38:25.619776] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.619780] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.619789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.629704] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.629767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.629777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.629781] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.629785] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.629795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 
00:29:48.229 [2024-06-10 14:38:25.639760] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.639806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.639819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.639824] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.639828] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.639838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.649789] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.649832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.649842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.649847] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.649851] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.649860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.659798] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.659843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.659854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.659859] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.659863] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.659872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 
00:29:48.229 [2024-06-10 14:38:25.669832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.669884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.669895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.669899] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.669903] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.669913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.679863] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.679915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.679926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.679930] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.679935] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.679947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.689888] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.689930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.689941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.689946] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.689950] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.689960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 
00:29:48.229 [2024-06-10 14:38:25.699911] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.699958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.699969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.699973] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.699978] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.699987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.709937] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.709989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.710000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.710004] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.710008] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.710018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.719960] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.720005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.720016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.720021] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.720025] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.720035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 
00:29:48.229 [2024-06-10 14:38:25.729901] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.729999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.730012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.730017] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.730021] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.730031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.229 qpair failed and we were unable to recover it. 00:29:48.229 [2024-06-10 14:38:25.740013] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.229 [2024-06-10 14:38:25.740111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.229 [2024-06-10 14:38:25.740122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.229 [2024-06-10 14:38:25.740127] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.229 [2024-06-10 14:38:25.740131] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.229 [2024-06-10 14:38:25.740142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.230 [2024-06-10 14:38:25.750048] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.750099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.750111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.750116] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.750120] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.750129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 
00:29:48.230 [2024-06-10 14:38:25.760076] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.760127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.760138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.760143] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.760147] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.760157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.230 [2024-06-10 14:38:25.770124] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.770169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.770180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.770184] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.770188] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.770201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.230 [2024-06-10 14:38:25.780127] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.780177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.780188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.780193] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.780197] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.780207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 
00:29:48.230 [2024-06-10 14:38:25.790161] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.790248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.790259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.790264] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.790268] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.790278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.230 [2024-06-10 14:38:25.800179] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.800226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.800237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.800242] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.800246] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.800255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.230 [2024-06-10 14:38:25.810255] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.810301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.810312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.810320] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.810325] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.810335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 
00:29:48.230 [2024-06-10 14:38:25.820343] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.230 [2024-06-10 14:38:25.820401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.230 [2024-06-10 14:38:25.820412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.230 [2024-06-10 14:38:25.820417] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.230 [2024-06-10 14:38:25.820421] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.230 [2024-06-10 14:38:25.820431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.230 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.830252] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.830309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.830329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.830334] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.830338] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.830349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.840301] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.840385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.840395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.840400] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.840404] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.840414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 
00:29:48.493 [2024-06-10 14:38:25.850323] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.850369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.850379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.850383] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.850388] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.850397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.860344] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.860398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.860408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.860413] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.860420] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.860430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.870248] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.870331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.870343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.870348] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.870352] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.870362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 
00:29:48.493 [2024-06-10 14:38:25.880404] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.880478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.880488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.880493] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.880497] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.880507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.890311] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.890371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.890382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.890386] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.890391] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.890400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.900426] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.900473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.900483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.900488] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.900492] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.900502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 
00:29:48.493 [2024-06-10 14:38:25.910532] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.910579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.910589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.910594] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.910598] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.910608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.920396] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.920440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.920451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.920456] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.920460] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.920470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 00:29:48.493 [2024-06-10 14:38:25.930559] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.930606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.930617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.930622] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.930626] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.493 [2024-06-10 14:38:25.930635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.493 qpair failed and we were unable to recover it. 
00:29:48.493 [2024-06-10 14:38:25.940539] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.493 [2024-06-10 14:38:25.940585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.493 [2024-06-10 14:38:25.940596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.493 [2024-06-10 14:38:25.940600] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.493 [2024-06-10 14:38:25.940605] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.940614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:25.950589] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:25.950637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:25.950647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:25.950654] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:25.950658] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.950668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:25.960636] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:25.960703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:25.960713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:25.960718] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:25.960722] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.960731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 
00:29:48.494 [2024-06-10 14:38:25.970652] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:25.970713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:25.970723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:25.970728] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:25.970732] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.970741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:25.980683] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:25.980735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:25.980746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:25.980751] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:25.980755] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.980764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:25.990695] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:25.990749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:25.990761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:25.990766] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:25.990770] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:25.990780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 
00:29:48.494 [2024-06-10 14:38:26.000729] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.000773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.000784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.000789] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.000793] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.000803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:26.010760] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.010853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.010864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.010868] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.010872] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.010882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:26.020797] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.020845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.020855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.020860] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.020864] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.020873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 
00:29:48.494 [2024-06-10 14:38:26.030819] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.030868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.030879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.030883] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.030887] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.030897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:26.040827] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.040877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.040888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.040895] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.040899] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.040909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:26.050874] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.050917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.050927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.050932] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.050936] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.050946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 
00:29:48.494 [2024-06-10 14:38:26.060908] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.060957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.060968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.060972] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.060977] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.060986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.494 [2024-06-10 14:38:26.070931] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.494 [2024-06-10 14:38:26.070978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.494 [2024-06-10 14:38:26.070988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.494 [2024-06-10 14:38:26.070993] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.494 [2024-06-10 14:38:26.070997] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.494 [2024-06-10 14:38:26.071006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.494 qpair failed and we were unable to recover it. 00:29:48.495 [2024-06-10 14:38:26.080975] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.495 [2024-06-10 14:38:26.081024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.495 [2024-06-10 14:38:26.081034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.495 [2024-06-10 14:38:26.081039] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.495 [2024-06-10 14:38:26.081043] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.495 [2024-06-10 14:38:26.081053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.495 qpair failed and we were unable to recover it. 
00:29:48.756 [2024-06-10 14:38:26.090995] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.756 [2024-06-10 14:38:26.091043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.756 [2024-06-10 14:38:26.091054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.756 [2024-06-10 14:38:26.091058] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.756 [2024-06-10 14:38:26.091062] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.756 [2024-06-10 14:38:26.091072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 14:38:26.101015] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.756 [2024-06-10 14:38:26.101104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.756 [2024-06-10 14:38:26.101122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.756 [2024-06-10 14:38:26.101128] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.756 [2024-06-10 14:38:26.101132] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.756 [2024-06-10 14:38:26.101145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.756 qpair failed and we were unable to recover it. 00:29:48.756 [2024-06-10 14:38:26.111042] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.756 [2024-06-10 14:38:26.111096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.756 [2024-06-10 14:38:26.111115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.756 [2024-06-10 14:38:26.111120] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.756 [2024-06-10 14:38:26.111125] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd7a4000b90 00:29:48.756 [2024-06-10 14:38:26.111138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:48.757 qpair failed and we were unable to recover it. 
00:29:48.757 [2024-06-10 14:38:26.121074] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.757 [2024-06-10 14:38:26.121182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.757 [2024-06-10 14:38:26.121244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.757 [2024-06-10 14:38:26.121269] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.757 [2024-06-10 14:38:26.121290] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd79c000b90 00:29:48.757 [2024-06-10 14:38:26.121352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 14:38:26.131083] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.757 [2024-06-10 14:38:26.131168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.757 [2024-06-10 14:38:26.131222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.757 [2024-06-10 14:38:26.131240] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.757 [2024-06-10 14:38:26.131255] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd79c000b90 00:29:48.757 [2024-06-10 14:38:26.131294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.757 qpair failed and we were unable to recover it. 00:29:48.757 [2024-06-10 14:38:26.131441] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:48.757 A controller has encountered a failure and is being reset. 00:29:48.757 [2024-06-10 14:38:26.131491] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d2230 (9): Bad file descriptor 00:29:48.757 Controller properly reset. 00:29:48.757 Initializing NVMe Controllers 00:29:48.757 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:48.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:48.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:48.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:48.757 Initialization complete. Launching workers. 
00:29:48.757 Starting thread on core 1 00:29:48.757 Starting thread on core 2 00:29:48.757 Starting thread on core 3 00:29:48.757 Starting thread on core 0 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:48.757 00:29:48.757 real 0m11.342s 00:29:48.757 user 0m21.990s 00:29:48.757 sys 0m3.491s 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.757 ************************************ 00:29:48.757 END TEST nvmf_target_disconnect_tc2 00:29:48.757 ************************************ 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:48.757 rmmod nvme_tcp 00:29:48.757 rmmod nvme_fabrics 00:29:48.757 rmmod nvme_keyring 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3223408 ']' 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3223408 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 3223408 ']' 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 3223408 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:48.757 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3223408 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3223408' 00:29:49.018 killing process with pid 3223408 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 3223408 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 3223408 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.018 
14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.018 14:38:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.585 14:38:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:51.585 00:29:51.585 real 0m21.177s 00:29:51.585 user 0m49.307s 00:29:51.585 sys 0m9.148s 00:29:51.585 14:38:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:51.585 14:38:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:51.585 ************************************ 00:29:51.585 END TEST nvmf_target_disconnect 00:29:51.585 ************************************ 00:29:51.585 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:51.585 14:38:28 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:51.585 14:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.585 14:38:28 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:51.585 00:29:51.585 real 22m43.661s 00:29:51.585 user 49m8.591s 00:29:51.585 sys 6m57.647s 00:29:51.585 14:38:28 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:51.585 14:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.585 ************************************ 00:29:51.585 END TEST nvmf_tcp 00:29:51.585 ************************************ 00:29:51.585 14:38:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:51.585 14:38:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:51.585 14:38:28 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:51.585 14:38:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:51.585 14:38:28 -- common/autotest_common.sh@10 -- # set +x 00:29:51.585 ************************************ 00:29:51.585 START TEST spdkcli_nvmf_tcp 00:29:51.586 ************************************ 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:51.586 * Looking for test storage... 
00:29:51.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3225282 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3225282 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 3225282 ']' 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:51.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.586 14:38:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:51.586 [2024-06-10 14:38:28.939185] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:29:51.586 [2024-06-10 14:38:28.939250] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225282 ] 00:29:51.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.586 [2024-06-10 14:38:29.015437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.586 [2024-06-10 14:38:29.083414] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.586 [2024-06-10 14:38:29.083419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.156 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:52.156 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:29:52.156 14:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:52.156 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:52.156 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.416 14:38:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:52.416 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:52.416 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:52.416 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:52.416 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:52.416 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:52.416 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:52.416 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.416 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.416 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.416 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:52.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:52.416 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:52.416 ' 00:29:54.960 [2024-06-10 14:38:32.167967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.902 [2024-06-10 14:38:33.331761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:58.445 [2024-06-10 14:38:35.470006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:59.830 [2024-06-10 14:38:37.303435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:01.214 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:01.214 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:01.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:01.214 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:01.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:01.214 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:01.474 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.475 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:01.475 14:38:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:01.735 14:38:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.996 14:38:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:01.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:01.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:01.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:01.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:01.996 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:01.996 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:01.996 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:01.996 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:01.996 ' 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:07.323 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:07.323 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:07.323 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:07.323 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3225282 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 3225282 ']' 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 3225282 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:07.323 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3225282 00:30:07.324 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:07.324 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:07.324 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3225282' 00:30:07.324 killing process with pid 3225282 00:30:07.324 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 3225282 00:30:07.324 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 3225282 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3225282 ']' 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3225282 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 3225282 ']' 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 3225282 00:30:07.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3225282) - No such process 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 3225282 is not found' 00:30:07.584 Process with pid 3225282 is not found 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:07.584 14:38:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:07.584 00:30:07.584 real 0m16.235s 00:30:07.584 user 0m34.266s 00:30:07.584 sys 0m0.776s 00:30:07.584 14:38:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:07.584 14:38:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.584 ************************************ 00:30:07.584 END TEST spdkcli_nvmf_tcp 00:30:07.584 ************************************ 00:30:07.584 14:38:45 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:07.584 14:38:45 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:07.584 14:38:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:07.584 14:38:45 -- common/autotest_common.sh@10 -- # set +x 00:30:07.584 ************************************ 00:30:07.584 START TEST nvmf_identify_passthru 00:30:07.584 ************************************ 00:30:07.584 14:38:45 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:07.584 * Looking for test storage... 00:30:07.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:07.584 14:38:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.584 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:07.845 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:07.845 14:38:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.845 14:38:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.845 14:38:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:07.846 14:38:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.846 14:38:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.846 14:38:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:07.846 14:38:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:07.846 14:38:45 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:07.846 14:38:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.433 14:38:51 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:14.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:14.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.433 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:14.434 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:14.434 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
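The probe traced above selects NICs purely by PCI vendor:device ID (0x8086:0x159b, collected into the e810 list by nvmf/common.sh) and then reads the interface names from sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1. A rough standalone equivalent of that lookup, shown only as an illustration (the harness walks its own pci_bus_cache rather than calling lspci), would be:

  # Illustrative only: find Intel E810 ports (8086:159b) and their kernel netdev names.
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "E810 port: $bdf"
      ls "/sys/bus/pci/devices/$bdf/net/"   # e.g. cvl_0_0 or cvl_0_1
  done

nvmf_tcp_init below then splits the two ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and an iptables ACCEPT rule for TCP port 4420 is inserted before the cross-namespace pings verify connectivity.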
00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:14.434 14:38:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.434 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:14.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:30:14.696 00:30:14.696 --- 10.0.0.2 ping statistics --- 00:30:14.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.696 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:30:14.696 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:30:14.696 00:30:14.696 --- 10.0.0.1 ping statistics --- 00:30:14.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.696 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:14.958 14:38:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:30:14.958 14:38:52 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:14.958 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:14.958 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.528 
14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:15.528 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:15.528 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:15.528 14:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:15.528 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3232276 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:16.100 14:38:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3232276 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 3232276 ']' 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:16.100 14:38:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.100 [2024-06-10 14:38:53.501967] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:30:16.100 [2024-06-10 14:38:53.502052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.100 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.100 [2024-06-10 14:38:53.589002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.100 [2024-06-10 14:38:53.686221] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.100 [2024-06-10 14:38:53.686279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
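The target has just been launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which holds back framework initialization until the passthru identify handler has been switched on over RPC. Condensed, the rpc_cmd calls traced below (rpc_cmd being the autotest wrapper around scripts/rpc.py) amount to roughly the following sequence; this outline is a paraphrase of identify_passthru.sh using the RPC names and arguments visible in this log, not a verbatim excerpt:

  # Sketch of the passthru setup flow, in the order the test issues it.
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr      # must precede framework_start_init
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Fabric-side identify: its Serial/Model Number fields must match the local PCIe identify.
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The comparisons further down ('[' S64GNE0R605487 '!=' S64GNE0R605487 ']' and the matching SAMSUNG model check) are exactly that verification: the values read back over NVMe/TCP must equal the ones captured from the 0000:65:00.0 controller earlier.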
00:30:16.100 [2024-06-10 14:38:53.686287] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.100 [2024-06-10 14:38:53.686294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.100 [2024-06-10 14:38:53.686300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.100 [2024-06-10 14:38:53.686437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.100 [2024-06-10 14:38:53.686735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.100 [2024-06-10 14:38:53.686906] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.100 [2024-06-10 14:38:53.686907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.041 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:30:17.042 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.042 INFO: Log level set to 20 00:30:17.042 INFO: Requests: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "method": "nvmf_set_config", 00:30:17.042 "id": 1, 00:30:17.042 "params": { 00:30:17.042 "admin_cmd_passthru": { 00:30:17.042 "identify_ctrlr": true 00:30:17.042 } 00:30:17.042 } 00:30:17.042 } 00:30:17.042 00:30:17.042 INFO: response: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "id": 1, 00:30:17.042 "result": true 00:30:17.042 } 00:30:17.042 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.042 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.042 INFO: Setting log level to 20 00:30:17.042 INFO: Setting log level to 20 00:30:17.042 INFO: Log level set to 20 00:30:17.042 INFO: Log level set to 20 00:30:17.042 INFO: Requests: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "method": "framework_start_init", 00:30:17.042 "id": 1 00:30:17.042 } 00:30:17.042 00:30:17.042 INFO: Requests: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "method": "framework_start_init", 00:30:17.042 "id": 1 00:30:17.042 } 00:30:17.042 00:30:17.042 [2024-06-10 14:38:54.460741] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:17.042 INFO: response: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "id": 1, 00:30:17.042 "result": true 00:30:17.042 } 00:30:17.042 00:30:17.042 INFO: response: 00:30:17.042 { 00:30:17.042 "jsonrpc": "2.0", 00:30:17.042 "id": 1, 00:30:17.042 "result": true 00:30:17.042 } 00:30:17.042 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.042 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.042 14:38:54 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:17.042 INFO: Setting log level to 40 00:30:17.042 INFO: Setting log level to 40 00:30:17.042 INFO: Setting log level to 40 00:30:17.042 [2024-06-10 14:38:54.473989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.042 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.042 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.042 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.303 Nvme0n1 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.303 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.303 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.303 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.303 [2024-06-10 14:38:54.860508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.303 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.303 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.303 [ 00:30:17.303 { 00:30:17.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:17.304 "subtype": "Discovery", 00:30:17.304 "listen_addresses": [], 00:30:17.304 "allow_any_host": true, 00:30:17.304 "hosts": [] 00:30:17.304 }, 00:30:17.304 { 00:30:17.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.304 "subtype": "NVMe", 00:30:17.304 "listen_addresses": [ 00:30:17.304 { 00:30:17.304 "trtype": "TCP", 00:30:17.304 "adrfam": "IPv4", 00:30:17.304 "traddr": "10.0.0.2", 00:30:17.304 "trsvcid": "4420" 00:30:17.304 } 00:30:17.304 ], 00:30:17.304 "allow_any_host": true, 00:30:17.304 "hosts": [], 00:30:17.304 "serial_number": 
"SPDK00000000000001", 00:30:17.304 "model_number": "SPDK bdev Controller", 00:30:17.304 "max_namespaces": 1, 00:30:17.304 "min_cntlid": 1, 00:30:17.304 "max_cntlid": 65519, 00:30:17.304 "namespaces": [ 00:30:17.304 { 00:30:17.304 "nsid": 1, 00:30:17.304 "bdev_name": "Nvme0n1", 00:30:17.304 "name": "Nvme0n1", 00:30:17.304 "nguid": "36344730526054870025384500000040", 00:30:17.304 "uuid": "36344730-5260-5487-0025-384500000040" 00:30:17.304 } 00:30:17.304 ] 00:30:17.304 } 00:30:17.304 ] 00:30:17.304 14:38:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.304 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:17.304 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:17.304 14:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:17.565 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:17.827 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.827 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.827 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.827 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:17.827 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:17.827 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:17.827 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:17.827 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.827 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:18.087 rmmod nvme_tcp 00:30:18.087 rmmod nvme_fabrics 00:30:18.087 rmmod nvme_keyring 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:18.087 14:38:55 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3232276 ']' 00:30:18.087 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3232276 00:30:18.087 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 3232276 ']' 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 3232276 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3232276 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3232276' 00:30:18.088 killing process with pid 3232276 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 3232276 00:30:18.088 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 3232276 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:18.349 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.349 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:18.349 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.403 14:38:57 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:20.403 00:30:20.403 real 0m12.799s 00:30:20.403 user 0m10.819s 00:30:20.403 sys 0m6.098s 00:30:20.403 14:38:57 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:20.403 14:38:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:20.403 ************************************ 00:30:20.403 END TEST nvmf_identify_passthru 00:30:20.403 ************************************ 00:30:20.403 14:38:57 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:20.403 14:38:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:20.403 14:38:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:20.403 14:38:57 -- common/autotest_common.sh@10 -- # set +x 00:30:20.403 ************************************ 00:30:20.403 START TEST nvmf_dif 00:30:20.403 ************************************ 00:30:20.403 14:38:57 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:20.664 * Looking for test storage... 
00:30:20.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.664 14:38:58 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.664 14:38:58 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.664 14:38:58 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.664 14:38:58 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.664 14:38:58 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.664 14:38:58 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.665 14:38:58 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.665 14:38:58 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.665 14:38:58 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:20.665 14:38:58 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.665 14:38:58 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:20.665 14:38:58 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:20.665 14:38:58 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:20.665 14:38:58 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:20.665 14:38:58 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.665 14:38:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:20.665 14:38:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:20.665 14:38:58 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.665 14:38:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:27.250 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:27.250 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:27.250 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:27.250 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.250 14:39:04 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:30:27.250 00:30:27.250 --- 10.0.0.2 ping statistics --- 00:30:27.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.250 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:27.250 00:30:27.250 --- 10.0.0.1 ping statistics --- 00:30:27.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.250 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:27.250 14:39:04 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:30.551 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:30.551 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:30.551 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.812 14:39:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:30.812 14:39:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3238419 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3238419 00:30:30.812 14:39:08 nvmf_dif -- 
common/autotest_common.sh@830 -- # '[' -z 3238419 ']' 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:30.812 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 14:39:08 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:30.812 [2024-06-10 14:39:08.282539] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:30:30.812 [2024-06-10 14:39:08.282600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.812 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.812 [2024-06-10 14:39:08.354529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.072 [2024-06-10 14:39:08.448391] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.072 [2024-06-10 14:39:08.448450] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.072 [2024-06-10 14:39:08.448459] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.072 [2024-06-10 14:39:08.448466] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.072 [2024-06-10 14:39:08.448472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
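nvmfappstart above boots the target inside the test namespace and then waits in waitforlisten for the RPC socket at /var/tmp/spdk.sock. A by-hand sketch of the same sequence, assuming an SPDK checkout as the working directory (binary path, namespace name and flags copied from the trace):

    # Start the target inside the namespace that owns the target-side port; -i 0 is the
    # shared-memory ID and -e 0xFFFF enables every tracepoint group (both from the trace).
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # waitforlisten does roughly this: poll the UNIX-domain RPC socket until it answers.
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The UNIX-domain socket lives on the filesystem, so RPCs work from the default namespace even though the target process runs inside cvl_0_0_ns_spdk.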
00:30:31.072 [2024-06-10 14:39:08.448498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.072 14:39:08 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:31.072 14:39:08 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:30:31.072 14:39:08 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.072 14:39:08 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:31.072 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.072 14:39:08 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.072 14:39:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:31.072 14:39:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.073 [2024-06-10 14:39:08.600036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.073 14:39:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:31.073 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.073 ************************************ 00:30:31.073 START TEST fio_dif_1_default 00:30:31.073 ************************************ 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.073 bdev_null0 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.073 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.333 [2024-06-10 14:39:08.688447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.333 { 00:30:31.333 "params": { 00:30:31.333 "name": "Nvme$subsystem", 00:30:31.333 "trtype": "$TEST_TRANSPORT", 00:30:31.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.333 "adrfam": "ipv4", 00:30:31.333 "trsvcid": "$NVMF_PORT", 00:30:31.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.333 "hdgst": ${hdgst:-false}, 00:30:31.333 "ddgst": ${ddgst:-false} 00:30:31.333 }, 00:30:31.333 "method": "bdev_nvme_attach_controller" 00:30:31.333 } 00:30:31.333 EOF 00:30:31.333 )") 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:31.333 14:39:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.333 "params": { 00:30:31.333 "name": "Nvme0", 00:30:31.333 "trtype": "tcp", 00:30:31.333 "traddr": "10.0.0.2", 00:30:31.334 "adrfam": "ipv4", 00:30:31.334 "trsvcid": "4420", 00:30:31.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.334 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.334 "hdgst": false, 00:30:31.334 "ddgst": false 00:30:31.334 }, 00:30:31.334 "method": "bdev_nvme_attach_controller" 00:30:31.334 }' 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.334 14:39:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.595 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:31.595 fio-3.35 00:30:31.595 Starting 1 thread 00:30:31.595 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.833 00:30:43.833 filename0: (groupid=0, jobs=1): err= 0: pid=3238963: Mon Jun 10 14:39:19 2024 00:30:43.833 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10037msec) 00:30:43.833 slat (nsec): min=8230, max=62726, avg=8549.84, stdev=2219.27 00:30:43.833 clat (usec): min=492, max=43458, avg=41278.78, stdev=5273.59 00:30:43.833 lat (usec): min=500, max=43500, avg=41287.33, stdev=5273.47 00:30:43.833 clat percentiles (usec): 00:30:43.833 | 1.00th=[ 529], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:30:43.833 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:43.833 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:43.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:43.833 | 99.99th=[43254] 00:30:43.833 bw ( KiB/s): min= 352, max= 416, per=99.91%, avg=387.20, stdev=14.31, samples=20 00:30:43.833 iops : min= 88, max= 104, 
avg=96.80, stdev= 3.58, samples=20 00:30:43.833 lat (usec) : 500=0.21%, 750=1.44% 00:30:43.833 lat (msec) : 50=98.35% 00:30:43.833 cpu : usr=95.44%, sys=4.19%, ctx=37, majf=0, minf=242 00:30:43.833 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.833 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.833 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:43.833 00:30:43.833 Run status group 0 (all jobs): 00:30:43.833 READ: bw=387KiB/s (397kB/s), 387KiB/s-387KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10037-10037msec 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.833 00:30:43.833 real 0m11.184s 00:30:43.833 user 0m18.362s 00:30:43.833 sys 0m0.836s 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:43.833 14:39:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.833 ************************************ 00:30:43.833 END TEST fio_dif_1_default 00:30:43.833 ************************************ 00:30:43.833 14:39:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:43.834 14:39:19 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:43.834 14:39:19 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 ************************************ 00:30:43.834 START TEST fio_dif_1_multi_subsystems 00:30:43.834 ************************************ 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.834 14:39:19 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 bdev_null0 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 [2024-06-10 14:39:19.951538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 bdev_null1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.834 14:39:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.834 { 00:30:43.834 "params": { 00:30:43.834 "name": "Nvme$subsystem", 00:30:43.834 "trtype": "$TEST_TRANSPORT", 00:30:43.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.834 "adrfam": "ipv4", 00:30:43.834 "trsvcid": "$NVMF_PORT", 00:30:43.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.834 "hdgst": ${hdgst:-false}, 00:30:43.834 "ddgst": ${ddgst:-false} 00:30:43.834 }, 00:30:43.834 "method": "bdev_nvme_attach_controller" 00:30:43.834 } 00:30:43.834 EOF 00:30:43.834 )") 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.834 
14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.834 { 00:30:43.834 "params": { 00:30:43.834 "name": "Nvme$subsystem", 00:30:43.834 "trtype": "$TEST_TRANSPORT", 00:30:43.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.834 "adrfam": "ipv4", 00:30:43.834 "trsvcid": "$NVMF_PORT", 00:30:43.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.834 "hdgst": ${hdgst:-false}, 00:30:43.834 "ddgst": ${ddgst:-false} 00:30:43.834 }, 00:30:43.834 "method": "bdev_nvme_attach_controller" 00:30:43.834 } 00:30:43.834 EOF 00:30:43.834 )") 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
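The shell above is the harness assembling its fio command line: it ldd's the SPDK fio plugin to decide whether a sanitizer runtime must be preloaded, then hands fio the generated target JSON and the job file as anonymous file descriptors (/dev/fd/62 and /dev/fd/61). Stripped to its essentials, and reusing the harness's own gen_nvmf_target_json/gen_fio_conf helpers as an assumption, the invocation is roughly:

    # fio drives SPDK bdevs through the external spdk_bdev ioengine; the bdev JSON
    # config and the fio job file both arrive via process substitution, never touching disk.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)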
00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.834 "params": { 00:30:43.834 "name": "Nvme0", 00:30:43.834 "trtype": "tcp", 00:30:43.834 "traddr": "10.0.0.2", 00:30:43.834 "adrfam": "ipv4", 00:30:43.834 "trsvcid": "4420", 00:30:43.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.834 "hdgst": false, 00:30:43.834 "ddgst": false 00:30:43.834 }, 00:30:43.834 "method": "bdev_nvme_attach_controller" 00:30:43.834 },{ 00:30:43.834 "params": { 00:30:43.834 "name": "Nvme1", 00:30:43.834 "trtype": "tcp", 00:30:43.834 "traddr": "10.0.0.2", 00:30:43.834 "adrfam": "ipv4", 00:30:43.834 "trsvcid": "4420", 00:30:43.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.834 "hdgst": false, 00:30:43.834 "ddgst": false 00:30:43.834 }, 00:30:43.834 "method": "bdev_nvme_attach_controller" 00:30:43.834 }' 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:43.834 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.835 14:39:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.835 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:43.835 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:43.835 fio-3.35 00:30:43.835 Starting 2 threads 00:30:43.835 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.837 00:30:53.837 filename0: (groupid=0, jobs=1): err= 0: pid=3241723: Mon Jun 10 14:39:31 2024 00:30:53.837 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10034msec) 00:30:53.837 slat (nsec): min=8179, max=29415, avg=8599.51, stdev=1635.66 00:30:53.837 clat (usec): min=40738, max=43031, avg=41098.50, stdev=367.15 00:30:53.837 lat (usec): min=40746, max=43040, avg=41107.10, stdev=367.68 00:30:53.837 clat percentiles (usec): 00:30:53.837 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:53.837 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:53.837 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:30:53.837 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:30:53.837 | 99.99th=[43254] 
00:30:53.837 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=388.80, stdev=11.72, samples=20 00:30:53.837 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:53.837 lat (msec) : 50=100.00% 00:30:53.837 cpu : usr=96.58%, sys=3.21%, ctx=13, majf=0, minf=74 00:30:53.837 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.837 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.837 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.837 filename1: (groupid=0, jobs=1): err= 0: pid=3241724: Mon Jun 10 14:39:31 2024 00:30:53.837 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10039msec) 00:30:53.837 slat (nsec): min=8173, max=60310, avg=8571.15, stdev=1840.20 00:30:53.837 clat (usec): min=690, max=42109, avg=21110.87, stdev=20229.63 00:30:53.837 lat (usec): min=698, max=42117, avg=21119.44, stdev=20229.56 00:30:53.837 clat percentiles (usec): 00:30:53.837 | 1.00th=[ 750], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 922], 00:30:53.837 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 2089], 60.00th=[41157], 00:30:53.837 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:53.837 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:53.837 | 99.99th=[42206] 00:30:53.837 bw ( KiB/s): min= 704, max= 768, per=66.15%, avg=758.40, stdev=23.45, samples=20 00:30:53.837 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:30:53.837 lat (usec) : 750=0.89%, 1000=48.37% 00:30:53.837 lat (msec) : 2=0.63%, 4=0.21%, 50=49.89% 00:30:53.837 cpu : usr=96.56%, sys=3.22%, ctx=13, majf=0, minf=215 00:30:53.837 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.837 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.837 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.837 00:30:53.837 Run status group 0 (all jobs): 00:30:53.837 READ: bw=1146KiB/s (1173kB/s), 389KiB/s-757KiB/s (398kB/s-775kB/s), io=11.2MiB (11.8MB), run=10034-10039msec 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.837 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.837 00:30:53.837 real 0m11.343s 00:30:53.838 user 0m35.351s 00:30:53.838 sys 0m0.997s 00:30:53.838 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 ************************************ 00:30:53.838 END TEST fio_dif_1_multi_subsystems 00:30:53.838 ************************************ 00:30:53.838 14:39:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:53.838 14:39:31 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:53.838 14:39:31 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 ************************************ 00:30:53.838 START TEST fio_dif_rand_params 00:30:53.838 ************************************ 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.838 14:39:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 bdev_null0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.838 [2024-06-10 14:39:31.372801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.838 { 00:30:53.838 "params": { 00:30:53.838 "name": "Nvme$subsystem", 00:30:53.838 "trtype": "$TEST_TRANSPORT", 00:30:53.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.838 "adrfam": "ipv4", 00:30:53.838 
"trsvcid": "$NVMF_PORT", 00:30:53.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.838 "hdgst": ${hdgst:-false}, 00:30:53.838 "ddgst": ${ddgst:-false} 00:30:53.838 }, 00:30:53.838 "method": "bdev_nvme_attach_controller" 00:30:53.838 } 00:30:53.838 EOF 00:30:53.838 )") 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.838 "params": { 00:30:53.838 "name": "Nvme0", 00:30:53.838 "trtype": "tcp", 00:30:53.838 "traddr": "10.0.0.2", 00:30:53.838 "adrfam": "ipv4", 00:30:53.838 "trsvcid": "4420", 00:30:53.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.838 "hdgst": false, 00:30:53.838 "ddgst": false 00:30:53.838 }, 00:30:53.838 "method": "bdev_nvme_attach_controller" 00:30:53.838 }' 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:53.838 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:54.121 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:54.121 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:54.121 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:54.121 14:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:54.388 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:54.388 ... 
00:30:54.388 fio-3.35 00:30:54.388 Starting 3 threads 00:30:54.388 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.043 00:31:01.043 filename0: (groupid=0, jobs=1): err= 0: pid=3243920: Mon Jun 10 14:39:37 2024 00:31:01.043 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(141MiB/5006msec) 00:31:01.043 slat (nsec): min=8217, max=32020, avg=8993.31, stdev=1634.12 00:31:01.043 clat (usec): min=5139, max=92266, avg=13288.85, stdev=10306.50 00:31:01.043 lat (usec): min=5148, max=92275, avg=13297.84, stdev=10306.55 00:31:01.043 clat percentiles (usec): 00:31:01.043 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7570], 20.00th=[ 8356], 00:31:01.043 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[10945], 60.00th=[11469], 00:31:01.043 | 70.00th=[12256], 80.00th=[13829], 90.00th=[15270], 95.00th=[46924], 00:31:01.043 | 99.00th=[51643], 99.50th=[52691], 99.90th=[87557], 99.95th=[91751], 00:31:01.043 | 99.99th=[91751] 00:31:01.043 bw ( KiB/s): min=14080, max=37632, per=34.36%, avg=28825.60, stdev=6972.52, samples=10 00:31:01.043 iops : min= 110, max= 294, avg=225.20, stdev=54.47, samples=10 00:31:01.043 lat (msec) : 10=37.38%, 20=56.24%, 50=3.90%, 100=2.48% 00:31:01.043 cpu : usr=96.30%, sys=3.44%, ctx=15, majf=0, minf=74 00:31:01.043 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 issued rwts: total=1129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.043 filename0: (groupid=0, jobs=1): err= 0: pid=3243921: Mon Jun 10 14:39:37 2024 00:31:01.043 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(142MiB/5029msec) 00:31:01.043 slat (nsec): min=8215, max=36738, avg=9255.11, stdev=1543.66 00:31:01.043 clat (usec): min=4521, max=91579, avg=13232.13, stdev=13421.84 00:31:01.043 lat (usec): min=4530, max=91588, avg=13241.39, stdev=13421.82 00:31:01.043 clat percentiles (usec): 00:31:01.043 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7308], 00:31:01.043 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9896], 00:31:01.043 | 70.00th=[10552], 80.00th=[11207], 90.00th=[15401], 95.00th=[50070], 00:31:01.043 | 99.00th=[52691], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:31:01.043 | 99.99th=[91751] 00:31:01.043 bw ( KiB/s): min=21504, max=39936, per=34.67%, avg=29081.60, stdev=5819.21, samples=10 00:31:01.043 iops : min= 168, max= 312, avg=227.20, stdev=45.46, samples=10 00:31:01.043 lat (msec) : 10=61.19%, 20=28.88%, 50=5.36%, 100=4.57% 00:31:01.043 cpu : usr=96.98%, sys=2.70%, ctx=23, majf=0, minf=72 00:31:01.043 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.043 filename0: (groupid=0, jobs=1): err= 0: pid=3243922: Mon Jun 10 14:39:37 2024 00:31:01.043 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(130MiB/5046msec) 00:31:01.043 slat (nsec): min=8230, max=32230, avg=8960.92, stdev=1328.30 00:31:01.043 clat (usec): min=5826, max=90175, avg=14516.08, stdev=11773.98 00:31:01.043 lat (usec): min=5834, max=90183, avg=14525.05, stdev=11773.95 00:31:01.043 clat percentiles (usec): 
00:31:01.043 | 1.00th=[ 6325], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8848], 00:31:01.043 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:31:01.043 | 70.00th=[13173], 80.00th=[14615], 90.00th=[16450], 95.00th=[48497], 00:31:01.043 | 99.00th=[52691], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:31:01.043 | 99.99th=[89654] 00:31:01.043 bw ( KiB/s): min=15616, max=32000, per=31.61%, avg=26521.60, stdev=5120.28, samples=10 00:31:01.043 iops : min= 122, max= 250, avg=207.20, stdev=40.00, samples=10 00:31:01.043 lat (msec) : 10=31.57%, 20=60.44%, 50=4.62%, 100=3.37% 00:31:01.043 cpu : usr=96.10%, sys=3.67%, ctx=6, majf=0, minf=126 00:31:01.043 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.043 issued rwts: total=1039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.043 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.043 00:31:01.043 Run status group 0 (all jobs): 00:31:01.043 READ: bw=81.9MiB/s (85.9MB/s), 25.7MiB/s-28.3MiB/s (27.0MB/s-29.7MB/s), io=413MiB (433MB), run=5006-5046msec 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
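The run switches to its next parameter set here: DIF type 2, 4k blocks, 8 jobs, iodepth 16 and two extra files, so create_subsystems 0 1 2 builds three null-bdev subsystems. The rpc_cmd calls traced below do one subsystem's worth of that setup; the same sequence over rpc.py, with every value copied from the trace (the TCP transport itself was created once earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip):

    rpc() { sudo ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    # 64 MB null bdev, 512-byte data blocks plus 16 bytes of metadata carrying DIF type 2.
    rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same three calls repeat for bdev_null1/cnode1 and bdev_null2/cnode2 in the trace that follows.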
00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 bdev_null0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.043 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 [2024-06-10 14:39:37.653739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 bdev_null1 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 bdev_null2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.044 { 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme$subsystem", 00:31:01.044 "trtype": "$TEST_TRANSPORT", 00:31:01.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "$NVMF_PORT", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.044 "hdgst": ${hdgst:-false}, 00:31:01.044 "ddgst": ${ddgst:-false} 00:31:01.044 }, 00:31:01.044 "method": "bdev_nvme_attach_controller" 00:31:01.044 } 00:31:01.044 EOF 00:31:01.044 )") 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.044 { 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme$subsystem", 00:31:01.044 "trtype": "$TEST_TRANSPORT", 00:31:01.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "$NVMF_PORT", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.044 "hdgst": ${hdgst:-false}, 00:31:01.044 "ddgst": ${ddgst:-false} 00:31:01.044 }, 00:31:01.044 "method": "bdev_nvme_attach_controller" 00:31:01.044 } 00:31:01.044 EOF 00:31:01.044 )") 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.044 { 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme$subsystem", 00:31:01.044 "trtype": "$TEST_TRANSPORT", 00:31:01.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "$NVMF_PORT", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.044 "hdgst": ${hdgst:-false}, 00:31:01.044 "ddgst": ${ddgst:-false} 00:31:01.044 }, 00:31:01.044 "method": "bdev_nvme_attach_controller" 00:31:01.044 } 00:31:01.044 EOF 00:31:01.044 )") 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:01.044 14:39:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme0", 00:31:01.044 "trtype": "tcp", 00:31:01.044 "traddr": "10.0.0.2", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "4420", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.044 "hdgst": false, 00:31:01.044 "ddgst": false 00:31:01.044 }, 00:31:01.044 "method": "bdev_nvme_attach_controller" 00:31:01.044 },{ 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme1", 00:31:01.044 "trtype": "tcp", 00:31:01.044 "traddr": "10.0.0.2", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "4420", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:01.044 "hdgst": false, 00:31:01.044 "ddgst": false 00:31:01.044 }, 00:31:01.044 "method": "bdev_nvme_attach_controller" 00:31:01.044 },{ 00:31:01.044 "params": { 00:31:01.044 "name": "Nvme2", 00:31:01.044 "trtype": "tcp", 00:31:01.044 "traddr": "10.0.0.2", 00:31:01.044 "adrfam": "ipv4", 00:31:01.044 "trsvcid": "4420", 00:31:01.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:01.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:01.045 "hdgst": false, 00:31:01.045 "ddgst": false 00:31:01.045 }, 00:31:01.045 "method": "bdev_nvme_attach_controller" 00:31:01.045 }' 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.045 14:39:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.045 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.045 ... 00:31:01.045 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.045 ... 00:31:01.045 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.045 ... 00:31:01.045 fio-3.35 00:31:01.045 Starting 24 threads 00:31:01.045 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.278 00:31:13.278 filename0: (groupid=0, jobs=1): err= 0: pid=3245425: Mon Jun 10 14:39:49 2024 00:31:13.278 read: IOPS=498, BW=1993KiB/s (2040kB/s)(19.5MiB/10021msec) 00:31:13.278 slat (usec): min=8, max=102, avg=28.79, stdev=15.84 00:31:13.278 clat (usec): min=25659, max=39709, avg=31837.49, stdev=647.78 00:31:13.278 lat (usec): min=25667, max=39735, avg=31866.28, stdev=648.53 00:31:13.278 clat percentiles (usec): 00:31:13.278 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.278 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.278 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.278 | 99.00th=[33162], 99.50th=[33817], 99.90th=[39584], 99.95th=[39584], 00:31:13.278 | 99.99th=[39584] 00:31:13.278 bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=1993.89, stdev=65.19, samples=19 00:31:13.278 iops : min= 479, max= 512, avg=498.47, stdev=16.30, samples=19 00:31:13.278 lat (msec) : 50=100.00% 00:31:13.278 cpu : usr=98.89%, sys=0.75%, ctx=128, majf=0, minf=22 00:31:13.278 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.278 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.278 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.278 filename0: (groupid=0, jobs=1): err= 0: pid=3245426: Mon Jun 10 14:39:49 2024 00:31:13.278 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10001msec) 00:31:13.278 slat (nsec): min=5780, max=75993, avg=25749.62, stdev=14243.71 00:31:13.278 clat (usec): min=19853, max=57949, avg=31902.18, stdev=1791.00 00:31:13.278 lat (usec): min=19910, max=57966, avg=31927.93, stdev=1790.53 00:31:13.278 clat percentiles (usec): 00:31:13.278 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:13.278 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.278 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.278 | 99.00th=[33817], 99.50th=[43254], 99.90th=[57934], 99.95th=[57934], 00:31:13.278 | 99.99th=[57934] 00:31:13.278 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.11, stdev=75.18, samples=19 00:31:13.278 iops : min= 448, max= 512, avg=496.74, stdev=18.76, samples=19 00:31:13.278 lat (msec) : 20=0.10%, 50=99.58%, 100=0.32% 00:31:13.278 cpu : 
usr=99.15%, sys=0.53%, ctx=69, majf=0, minf=23 00:31:13.279 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245427: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:31:13.279 slat (nsec): min=8241, max=75136, avg=10075.12, stdev=4160.58 00:31:13.279 clat (usec): min=8004, max=44769, avg=31833.81, stdev=1883.28 00:31:13.279 lat (usec): min=8025, max=44777, avg=31843.89, stdev=1882.51 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[20841], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:13.279 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.279 | 99.00th=[32900], 99.50th=[33424], 99.90th=[41681], 99.95th=[42730], 00:31:13.279 | 99.99th=[44827] 00:31:13.279 bw ( KiB/s): min= 1916, max= 2176, per=4.18%, avg=2003.00, stdev=75.39, samples=20 00:31:13.279 iops : min= 479, max= 544, avg=500.75, stdev=18.85, samples=20 00:31:13.279 lat (msec) : 10=0.32%, 20=0.64%, 50=99.04% 00:31:13.279 cpu : usr=98.88%, sys=0.74%, ctx=82, majf=0, minf=22 00:31:13.279 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245428: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10024msec) 00:31:13.279 slat (nsec): min=7471, max=91811, avg=30407.69, stdev=16500.35 00:31:13.279 clat (usec): min=25620, max=42775, avg=31844.05, stdev=774.86 00:31:13.279 lat (usec): min=25670, max=42794, avg=31874.46, stdev=774.97 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.279 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.279 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.279 | 99.00th=[32900], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:31:13.279 | 99.99th=[42730] 00:31:13.279 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1990.30, stdev=64.94, samples=20 00:31:13.279 iops : min= 480, max= 512, avg=497.50, stdev=16.25, samples=20 00:31:13.279 lat (msec) : 50=100.00% 00:31:13.279 cpu : usr=99.14%, sys=0.58%, ctx=11, majf=0, minf=20 00:31:13.279 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245429: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=497, BW=1990KiB/s 
(2038kB/s)(19.4MiB/10002msec) 00:31:13.279 slat (nsec): min=4965, max=88215, avg=29008.37, stdev=15179.81 00:31:13.279 clat (usec): min=22895, max=61206, avg=31889.06, stdev=1351.13 00:31:13.279 lat (usec): min=22904, max=61219, avg=31918.07, stdev=1350.57 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.279 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.279 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.279 | 99.00th=[33424], 99.50th=[33817], 99.90th=[51643], 99.95th=[51643], 00:31:13.279 | 99.99th=[61080] 00:31:13.279 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.11, stdev=78.10, samples=19 00:31:13.279 iops : min= 448, max= 512, avg=496.74, stdev=19.50, samples=19 00:31:13.279 lat (msec) : 50=99.68%, 100=0.32% 00:31:13.279 cpu : usr=98.65%, sys=0.82%, ctx=56, majf=0, minf=25 00:31:13.279 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245430: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10003msec) 00:31:13.279 slat (nsec): min=8229, max=74847, avg=16221.34, stdev=10194.20 00:31:13.279 clat (usec): min=2639, max=34648, avg=31346.55, stdev=3941.85 00:31:13.279 lat (usec): min=2656, max=34657, avg=31362.77, stdev=3941.31 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[ 4228], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:13.279 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.279 | 99.00th=[32900], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866], 00:31:13.279 | 99.99th=[34866] 00:31:13.279 bw ( KiB/s): min= 1920, max= 2792, per=4.24%, avg=2033.16, stdev=194.32, samples=19 00:31:13.279 iops : min= 480, max= 698, avg=508.21, stdev=48.60, samples=19 00:31:13.279 lat (msec) : 4=0.94%, 10=0.88%, 20=0.31%, 50=97.86% 00:31:13.279 cpu : usr=98.99%, sys=0.73%, ctx=7, majf=0, minf=26 00:31:13.279 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=5085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245431: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10022msec) 00:31:13.279 slat (usec): min=6, max=154, avg=19.03, stdev=10.68 00:31:13.279 clat (usec): min=22770, max=44320, avg=31965.36, stdev=958.15 00:31:13.279 lat (usec): min=22787, max=44337, avg=31984.39, stdev=957.28 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:13.279 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.279 | 99.00th=[33162], 99.50th=[33817], 
99.90th=[44303], 99.95th=[44303], 00:31:13.279 | 99.99th=[44303] 00:31:13.279 bw ( KiB/s): min= 1904, max= 2048, per=4.15%, avg=1989.40, stdev=66.56, samples=20 00:31:13.279 iops : min= 476, max= 512, avg=497.35, stdev=16.64, samples=20 00:31:13.279 lat (msec) : 50=100.00% 00:31:13.279 cpu : usr=99.15%, sys=0.53%, ctx=47, majf=0, minf=26 00:31:13.279 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename0: (groupid=0, jobs=1): err= 0: pid=3245432: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10006msec) 00:31:13.279 slat (nsec): min=5716, max=67362, avg=20687.74, stdev=10648.15 00:31:13.279 clat (usec): min=6158, max=57377, avg=31910.70, stdev=2336.76 00:31:13.279 lat (usec): min=6166, max=57393, avg=31931.39, stdev=2336.88 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.279 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.279 | 99.00th=[34341], 99.50th=[42730], 99.90th=[57410], 99.95th=[57410], 00:31:13.279 | 99.99th=[57410] 00:31:13.279 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.11, stdev=76.81, samples=19 00:31:13.279 iops : min= 448, max= 512, avg=496.74, stdev=19.17, samples=19 00:31:13.279 lat (msec) : 10=0.28%, 50=99.40%, 100=0.32% 00:31:13.279 cpu : usr=99.06%, sys=0.63%, ctx=63, majf=0, minf=27 00:31:13.279 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename1: (groupid=0, jobs=1): err= 0: pid=3245433: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10019msec) 00:31:13.279 slat (nsec): min=8287, max=65883, avg=18916.23, stdev=10383.87 00:31:13.279 clat (usec): min=18534, max=49758, avg=31954.74, stdev=1381.71 00:31:13.279 lat (usec): min=18544, max=49791, avg=31973.66, stdev=1381.53 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:13.279 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.279 | 99.00th=[32900], 99.50th=[33817], 99.90th=[49546], 99.95th=[49546], 00:31:13.279 | 99.99th=[49546] 00:31:13.279 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1990.55, stdev=65.17, samples=20 00:31:13.279 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:31:13.279 lat (msec) : 20=0.32%, 50=99.68% 00:31:13.279 cpu : usr=98.67%, sys=0.90%, ctx=139, majf=0, minf=26 00:31:13.279 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:13.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:13.279 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.279 filename1: (groupid=0, jobs=1): err= 0: pid=3245434: Mon Jun 10 14:39:49 2024 00:31:13.279 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10006msec) 00:31:13.279 slat (nsec): min=7881, max=87440, avg=21055.21, stdev=11681.87 00:31:13.279 clat (usec): min=8639, max=41742, avg=31797.42, stdev=1734.13 00:31:13.279 lat (usec): min=8647, max=41760, avg=31818.47, stdev=1734.75 00:31:13.279 clat percentiles (usec): 00:31:13.279 | 1.00th=[22938], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.279 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.280 | 99.00th=[33162], 99.50th=[34341], 99.90th=[41681], 99.95th=[41681], 00:31:13.280 | 99.99th=[41681] 00:31:13.280 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=2000.58, stdev=63.24, samples=19 00:31:13.280 iops : min= 480, max= 512, avg=500.11, stdev=15.78, samples=19 00:31:13.280 lat (msec) : 10=0.32%, 20=0.32%, 50=99.36% 00:31:13.280 cpu : usr=99.10%, sys=0.60%, ctx=19, majf=0, minf=26 00:31:13.280 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245435: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10015msec) 00:31:13.280 slat (nsec): min=7129, max=69615, avg=21863.17, stdev=13113.31 00:31:13.280 clat (usec): min=12941, max=44888, avg=31868.68, stdev=1284.06 00:31:13.280 lat (usec): min=12951, max=44907, avg=31890.54, stdev=1284.76 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:13.280 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.280 | 99.00th=[32900], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:31:13.280 | 99.99th=[44827] 00:31:13.280 bw ( KiB/s): min= 1920, max= 2104, per=4.16%, avg=1993.10, stdev=68.66, samples=20 00:31:13.280 iops : min= 480, max= 526, avg=498.20, stdev=17.18, samples=20 00:31:13.280 lat (msec) : 20=0.32%, 50=99.68% 00:31:13.280 cpu : usr=99.20%, sys=0.51%, ctx=7, majf=0, minf=24 00:31:13.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245436: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10020msec) 00:31:13.280 slat (nsec): min=6090, max=79116, avg=23171.28, stdev=13198.11 00:31:13.280 clat (usec): min=22791, max=42959, avg=31911.13, stdev=904.98 00:31:13.280 lat (usec): min=22806, max=42975, avg=31934.30, stdev=904.36 00:31:13.280 clat percentiles (usec): 00:31:13.280 
| 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:13.280 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.280 | 99.00th=[32900], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:31:13.280 | 99.99th=[42730] 00:31:13.280 bw ( KiB/s): min= 1904, max= 2048, per=4.15%, avg=1989.55, stdev=66.40, samples=20 00:31:13.280 iops : min= 476, max= 512, avg=497.35, stdev=16.64, samples=20 00:31:13.280 lat (msec) : 50=100.00% 00:31:13.280 cpu : usr=99.03%, sys=0.60%, ctx=68, majf=0, minf=25 00:31:13.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245437: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10019msec) 00:31:13.280 slat (nsec): min=5532, max=79468, avg=23602.45, stdev=13271.79 00:31:13.280 clat (usec): min=22819, max=42274, avg=31878.78, stdev=879.18 00:31:13.280 lat (usec): min=22828, max=42289, avg=31902.38, stdev=879.32 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:13.280 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.280 | 99.00th=[32900], 99.50th=[33817], 99.90th=[42206], 99.95th=[42206], 00:31:13.280 | 99.99th=[42206] 00:31:13.280 bw ( KiB/s): min= 1912, max= 2048, per=4.15%, avg=1989.80, stdev=66.04, samples=20 00:31:13.280 iops : min= 478, max= 512, avg=497.45, stdev=16.51, samples=20 00:31:13.280 lat (msec) : 50=100.00% 00:31:13.280 cpu : usr=98.92%, sys=0.70%, ctx=88, majf=0, minf=29 00:31:13.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245438: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=507, BW=2030KiB/s (2079kB/s)(19.9MiB/10024msec) 00:31:13.280 slat (nsec): min=8217, max=67306, avg=9644.14, stdev=2593.14 00:31:13.280 clat (usec): min=2804, max=34024, avg=31435.13, stdev=3852.39 00:31:13.280 lat (usec): min=2822, max=34033, avg=31444.78, stdev=3850.89 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[ 3687], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.280 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:13.280 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.280 | 99.00th=[32900], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:31:13.280 | 99.99th=[33817] 00:31:13.280 bw ( KiB/s): min= 1916, max= 2688, per=4.23%, avg=2028.80, stdev=167.70, samples=20 00:31:13.280 iops : min= 479, max= 672, avg=507.20, stdev=41.93, samples=20 00:31:13.280 lat (msec) : 4=1.22%, 10=0.49%, 20=0.49%, 50=97.80% 00:31:13.280 cpu : usr=99.16%, 
sys=0.54%, ctx=25, majf=0, minf=55 00:31:13.280 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245439: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10024msec) 00:31:13.280 slat (usec): min=8, max=105, avg=33.07, stdev=18.72 00:31:13.280 clat (usec): min=25465, max=42880, avg=31811.35, stdev=790.80 00:31:13.280 lat (usec): min=25475, max=42903, avg=31844.42, stdev=791.35 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:31:13.280 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.280 | 99.00th=[33162], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:31:13.280 | 99.99th=[42730] 00:31:13.280 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1990.30, stdev=64.94, samples=20 00:31:13.280 iops : min= 480, max= 512, avg=497.50, stdev=16.25, samples=20 00:31:13.280 lat (msec) : 50=100.00% 00:31:13.280 cpu : usr=98.54%, sys=0.86%, ctx=115, majf=0, minf=25 00:31:13.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename1: (groupid=0, jobs=1): err= 0: pid=3245440: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10001msec) 00:31:13.280 slat (nsec): min=5560, max=76143, avg=26324.34, stdev=13713.70 00:31:13.280 clat (usec): min=20080, max=58243, avg=31922.52, stdev=1666.96 00:31:13.280 lat (usec): min=20104, max=58259, avg=31948.84, stdev=1665.99 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:13.280 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.280 | 99.00th=[33162], 99.50th=[34341], 99.90th=[58459], 99.95th=[58459], 00:31:13.280 | 99.99th=[58459] 00:31:13.280 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1987.32, stdev=78.09, samples=19 00:31:13.280 iops : min= 448, max= 512, avg=496.79, stdev=19.63, samples=19 00:31:13.280 lat (msec) : 50=99.68%, 100=0.32% 00:31:13.280 cpu : usr=99.03%, sys=0.62%, ctx=37, majf=0, minf=23 00:31:13.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.280 filename2: (groupid=0, jobs=1): err= 0: pid=3245441: Mon Jun 10 14:39:49 2024 00:31:13.280 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10002msec) 00:31:13.280 slat 
(nsec): min=5282, max=87559, avg=28641.78, stdev=14967.76 00:31:13.280 clat (usec): min=20023, max=68402, avg=31905.12, stdev=1849.34 00:31:13.280 lat (usec): min=20032, max=68417, avg=31933.76, stdev=1848.49 00:31:13.280 clat percentiles (usec): 00:31:13.280 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.280 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.280 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.280 | 99.00th=[33424], 99.50th=[34341], 99.90th=[58983], 99.95th=[58983], 00:31:13.280 | 99.99th=[68682] 00:31:13.280 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.16, stdev=78.50, samples=19 00:31:13.280 iops : min= 448, max= 512, avg=496.79, stdev=19.63, samples=19 00:31:13.280 lat (msec) : 50=99.68%, 100=0.32% 00:31:13.280 cpu : usr=99.02%, sys=0.62%, ctx=59, majf=0, minf=23 00:31:13.280 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:13.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.280 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245442: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=501, BW=2004KiB/s (2053kB/s)(19.6MiB/10006msec) 00:31:13.281 slat (nsec): min=8231, max=89136, avg=26871.17, stdev=16974.52 00:31:13.281 clat (usec): min=13676, max=47578, avg=31704.81, stdev=1750.62 00:31:13.281 lat (usec): min=13691, max=47610, avg=31731.68, stdev=1752.04 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[21890], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.281 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.281 | 99.00th=[33162], 99.50th=[33817], 99.90th=[47449], 99.95th=[47449], 00:31:13.281 | 99.99th=[47449] 00:31:13.281 bw ( KiB/s): min= 1920, max= 2096, per=4.18%, avg=2003.11, stdev=66.13, samples=19 00:31:13.281 iops : min= 480, max= 524, avg=500.74, stdev=16.51, samples=19 00:31:13.281 lat (msec) : 20=0.30%, 50=99.70% 00:31:13.281 cpu : usr=99.06%, sys=0.64%, ctx=23, majf=0, minf=28 00:31:13.281 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=5014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245443: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=498, BW=1996KiB/s (2043kB/s)(19.5MiB/10006msec) 00:31:13.281 slat (nsec): min=6233, max=77676, avg=25385.59, stdev=13786.90 00:31:13.281 clat (usec): min=5854, max=57179, avg=31820.81, stdev=2174.68 00:31:13.281 lat (usec): min=5862, max=57195, avg=31846.19, stdev=2175.03 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.281 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.281 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:13.281 | 99.00th=[33162], 99.50th=[34341], 99.90th=[56886], 99.95th=[57410], 00:31:13.281 | 
99.99th=[57410] 00:31:13.281 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.11, stdev=78.10, samples=19 00:31:13.281 iops : min= 448, max= 512, avg=496.74, stdev=19.50, samples=19 00:31:13.281 lat (msec) : 10=0.32%, 50=99.36%, 100=0.32% 00:31:13.281 cpu : usr=99.08%, sys=0.61%, ctx=50, majf=0, minf=31 00:31:13.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245444: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:31:13.281 slat (nsec): min=6892, max=80141, avg=16915.25, stdev=11650.58 00:31:13.281 clat (usec): min=20105, max=45939, avg=31956.59, stdev=951.54 00:31:13.281 lat (usec): min=20114, max=45958, avg=31973.50, stdev=950.92 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.281 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:13.281 | 99.00th=[33817], 99.50th=[34341], 99.90th=[42206], 99.95th=[43254], 00:31:13.281 | 99.99th=[45876] 00:31:13.281 bw ( KiB/s): min= 1916, max= 2048, per=4.16%, avg=1993.89, stdev=65.19, samples=19 00:31:13.281 iops : min= 479, max= 512, avg=498.47, stdev=16.30, samples=19 00:31:13.281 lat (msec) : 50=100.00% 00:31:13.281 cpu : usr=97.81%, sys=1.35%, ctx=646, majf=0, minf=27 00:31:13.281 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245445: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10024msec) 00:31:13.281 slat (nsec): min=7246, max=93404, avg=27601.03, stdev=17490.67 00:31:13.281 clat (usec): min=25826, max=42708, avg=31899.17, stdev=773.02 00:31:13.281 lat (usec): min=25836, max=42736, avg=31926.77, stdev=771.75 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:13.281 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:13.281 | 99.00th=[32900], 99.50th=[33817], 99.90th=[42730], 99.95th=[42730], 00:31:13.281 | 99.99th=[42730] 00:31:13.281 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1990.30, stdev=64.94, samples=20 00:31:13.281 iops : min= 480, max= 512, avg=497.50, stdev=16.25, samples=20 00:31:13.281 lat (msec) : 50=100.00% 00:31:13.281 cpu : usr=99.05%, sys=0.59%, ctx=57, majf=0, minf=26 00:31:13.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=4992,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245446: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10018msec) 00:31:13.281 slat (nsec): min=7348, max=69348, avg=20618.89, stdev=10327.12 00:31:13.281 clat (usec): min=22853, max=40430, avg=31926.44, stdev=810.23 00:31:13.281 lat (usec): min=22870, max=40450, avg=31947.06, stdev=809.85 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:13.281 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:13.281 | 99.00th=[32900], 99.50th=[33817], 99.90th=[40633], 99.95th=[40633], 00:31:13.281 | 99.99th=[40633] 00:31:13.281 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1990.00, stdev=65.80, samples=20 00:31:13.281 iops : min= 479, max= 512, avg=497.50, stdev=16.45, samples=20 00:31:13.281 lat (msec) : 50=100.00% 00:31:13.281 cpu : usr=98.76%, sys=0.72%, ctx=47, majf=0, minf=23 00:31:13.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245447: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=505, BW=2024KiB/s (2072kB/s)(19.8MiB/10005msec) 00:31:13.281 slat (nsec): min=6160, max=82834, avg=18279.71, stdev=13429.06 00:31:13.281 clat (usec): min=7840, max=73068, avg=31525.25, stdev=4031.33 00:31:13.281 lat (usec): min=7849, max=73084, avg=31543.53, stdev=4029.89 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[21890], 5.00th=[25822], 10.00th=[26608], 20.00th=[27919], 00:31:13.281 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[36963], 95.00th=[37487], 00:31:13.281 | 99.00th=[39584], 99.50th=[47449], 99.90th=[56886], 99.95th=[57410], 00:31:13.281 | 99.99th=[72877] 00:31:13.281 bw ( KiB/s): min= 1808, max= 2096, per=4.21%, avg=2016.58, stdev=61.09, samples=19 00:31:13.281 iops : min= 452, max= 524, avg=504.11, stdev=15.27, samples=19 00:31:13.281 lat (msec) : 10=0.32%, 20=0.16%, 50=99.21%, 100=0.32% 00:31:13.281 cpu : usr=98.79%, sys=0.69%, ctx=47, majf=0, minf=26 00:31:13.281 IO depths : 1=0.9%, 2=2.1%, 4=6.6%, 8=75.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=89.9%, 8=7.6%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=5062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 filename2: (groupid=0, jobs=1): err= 0: pid=3245448: Mon Jun 10 14:39:49 2024 00:31:13.281 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10007msec) 00:31:13.281 slat (nsec): min=5926, max=78920, avg=20199.12, stdev=11076.61 00:31:13.281 clat (usec): min=6372, max=57769, avg=31874.42, stdev=2805.03 00:31:13.281 lat (usec): min=6381, max=57786, avg=31894.62, stdev=2804.55 00:31:13.281 clat percentiles (usec): 00:31:13.281 | 1.00th=[25297], 5.00th=[27395], 10.00th=[31589], 
20.00th=[31589], 00:31:13.281 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:13.281 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[33817], 00:31:13.281 | 99.00th=[38536], 99.50th=[46924], 99.90th=[57934], 99.95th=[57934], 00:31:13.281 | 99.99th=[57934] 00:31:13.281 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1987.95, stdev=72.29, samples=19 00:31:13.281 iops : min= 448, max= 512, avg=496.95, stdev=18.04, samples=19 00:31:13.281 lat (msec) : 10=0.20%, 20=0.12%, 50=99.36%, 100=0.32% 00:31:13.281 cpu : usr=98.99%, sys=0.71%, ctx=7, majf=0, minf=41 00:31:13.281 IO depths : 1=4.5%, 2=9.2%, 4=19.2%, 8=58.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:13.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.281 issued rwts: total=4996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.281 00:31:13.281 Run status group 0 (all jobs): 00:31:13.281 READ: bw=46.8MiB/s (49.1MB/s), 1990KiB/s-2033KiB/s (2038kB/s-2082kB/s), io=469MiB (492MB), run=10001-10024msec 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.281 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 bdev_null0 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 [2024-06-10 14:39:49.266848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 bdev_null1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:13.282 14:39:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:13.282 { 00:31:13.282 "params": { 00:31:13.282 "name": "Nvme$subsystem", 00:31:13.282 "trtype": "$TEST_TRANSPORT", 00:31:13.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.282 "adrfam": "ipv4", 00:31:13.282 "trsvcid": "$NVMF_PORT", 00:31:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.282 "hdgst": ${hdgst:-false}, 00:31:13.282 "ddgst": ${ddgst:-false} 00:31:13.282 }, 00:31:13.282 "method": "bdev_nvme_attach_controller" 00:31:13.282 } 00:31:13.282 EOF 00:31:13.282 )") 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:13.282 { 00:31:13.282 "params": { 00:31:13.282 "name": "Nvme$subsystem", 00:31:13.282 "trtype": "$TEST_TRANSPORT", 00:31:13.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.282 "adrfam": "ipv4", 00:31:13.282 "trsvcid": "$NVMF_PORT", 00:31:13.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.282 "hdgst": ${hdgst:-false}, 00:31:13.282 "ddgst": ${ddgst:-false} 00:31:13.282 }, 00:31:13.282 "method": "bdev_nvme_attach_controller" 00:31:13.282 } 00:31:13.282 EOF 00:31:13.282 )") 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
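The rpc_cmd calls traced above build the target side of this test case: a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1, wrapped in an NVMe-oF subsystem and exposed on a TCP listener. A minimal sketch of the same sequence with the standalone scripts/rpc.py client (assuming a running nvmf_tgt that already has a TCP transport; the NQN, serial, address and port mirror the log):

    rpc=./scripts/rpc.py    # path is an assumption; rpc_cmd in the test wraps the same JSON-RPC client
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420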
00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:13.282 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:13.283 "params": { 00:31:13.283 "name": "Nvme0", 00:31:13.283 "trtype": "tcp", 00:31:13.283 "traddr": "10.0.0.2", 00:31:13.283 "adrfam": "ipv4", 00:31:13.283 "trsvcid": "4420", 00:31:13.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.283 "hdgst": false, 00:31:13.283 "ddgst": false 00:31:13.283 }, 00:31:13.283 "method": "bdev_nvme_attach_controller" 00:31:13.283 },{ 00:31:13.283 "params": { 00:31:13.283 "name": "Nvme1", 00:31:13.283 "trtype": "tcp", 00:31:13.283 "traddr": "10.0.0.2", 00:31:13.283 "adrfam": "ipv4", 00:31:13.283 "trsvcid": "4420", 00:31:13.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:13.283 "hdgst": false, 00:31:13.283 "ddgst": false 00:31:13.283 }, 00:31:13.283 "method": "bdev_nvme_attach_controller" 00:31:13.283 }' 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.283 14:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.283 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:13.283 ... 00:31:13.283 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:13.283 ... 
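The JSON printed above is the bdev configuration that fio's spdk_bdev ioengine consumes; the test feeds it and the generated job file through /dev/fd process substitution. A rough standalone equivalent with on-disk files, assuming the plugin was built at build/fio/spdk_bdev and that the attached controller Nvme0 exposes a namespace bdev named Nvme0n1 (bdev.json would hold the bdev_nvme_attach_controller entries shown in the log):

    # Job parameters mirror the NULL_DIF=1 case above: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5.
    # The SPDK fio plugins are generally run in thread mode.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
      fio --ioengine=spdk_bdev --spdk_json_conf bdev.json --thread \
          --name=filename0 --filename=Nvme0n1 --rw=randread \
          --bs=8k,16k,128k --iodepth=8 --numjobs=2 --runtime=5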
00:31:13.283 fio-3.35 00:31:13.283 Starting 4 threads 00:31:13.283 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.590 00:31:18.590 filename0: (groupid=0, jobs=1): err= 0: pid=3247649: Mon Jun 10 14:39:55 2024 00:31:18.590 read: IOPS=2032, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5001msec) 00:31:18.590 slat (nsec): min=8180, max=68220, avg=8869.30, stdev=2273.63 00:31:18.590 clat (usec): min=1045, max=6510, avg=3912.20, stdev=683.17 00:31:18.590 lat (usec): min=1053, max=6518, avg=3921.07, stdev=682.98 00:31:18.590 clat percentiles (usec): 00:31:18.590 | 1.00th=[ 3130], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3490], 00:31:18.590 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:31:18.590 | 70.00th=[ 3752], 80.00th=[ 4015], 90.00th=[ 5342], 95.00th=[ 5604], 00:31:18.590 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6063], 99.95th=[ 6194], 00:31:18.590 | 99.99th=[ 6521] 00:31:18.590 bw ( KiB/s): min=15952, max=16576, per=24.29%, avg=16257.70, stdev=187.28, samples=10 00:31:18.590 iops : min= 1994, max= 2072, avg=2032.20, stdev=23.39, samples=10 00:31:18.590 lat (msec) : 2=0.13%, 4=79.84%, 10=20.03% 00:31:18.590 cpu : usr=97.82%, sys=1.94%, ctx=6, majf=0, minf=9 00:31:18.590 IO depths : 1=0.1%, 2=0.1%, 4=73.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 issued rwts: total=10164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.590 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.590 filename0: (groupid=0, jobs=1): err= 0: pid=3247651: Mon Jun 10 14:39:55 2024 00:31:18.590 read: IOPS=2199, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5003msec) 00:31:18.590 slat (nsec): min=8190, max=46005, avg=9524.31, stdev=3702.19 00:31:18.590 clat (usec): min=1189, max=6191, avg=3615.34, stdev=493.25 00:31:18.590 lat (usec): min=1214, max=6200, avg=3624.87, stdev=493.17 00:31:18.590 clat percentiles (usec): 00:31:18.590 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3359], 00:31:18.590 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3654], 00:31:18.590 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3818], 95.00th=[ 4752], 00:31:18.590 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5669], 99.95th=[ 5735], 00:31:18.590 | 99.99th=[ 6194] 00:31:18.590 bw ( KiB/s): min=16720, max=18368, per=26.29%, avg=17595.20, stdev=514.83, samples=10 00:31:18.590 iops : min= 2090, max= 2296, avg=2199.40, stdev=64.35, samples=10 00:31:18.590 lat (msec) : 2=0.25%, 4=92.29%, 10=7.46% 00:31:18.590 cpu : usr=97.06%, sys=2.60%, ctx=54, majf=0, minf=1 00:31:18.590 IO depths : 1=0.1%, 2=0.5%, 4=66.2%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 issued rwts: total=11002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.590 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.590 filename1: (groupid=0, jobs=1): err= 0: pid=3247652: Mon Jun 10 14:39:55 2024 00:31:18.590 read: IOPS=2033, BW=15.9MiB/s (16.7MB/s)(80.1MiB/5042msec) 00:31:18.590 slat (nsec): min=8185, max=45611, avg=9152.11, stdev=2444.22 00:31:18.590 clat (usec): min=1580, max=41695, avg=3889.83, stdev=917.91 00:31:18.590 lat (usec): min=1588, max=41704, avg=3898.98, stdev=917.79 00:31:18.590 clat percentiles (usec): 00:31:18.590 | 1.00th=[ 3097], 5.00th=[ 3392], 
10.00th=[ 3425], 20.00th=[ 3523], 00:31:18.590 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:31:18.590 | 70.00th=[ 3752], 80.00th=[ 3916], 90.00th=[ 5211], 95.00th=[ 5407], 00:31:18.590 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6194], 99.95th=[ 6194], 00:31:18.590 | 99.99th=[41681] 00:31:18.590 bw ( KiB/s): min=15904, max=16928, per=24.50%, avg=16396.80, stdev=339.56, samples=10 00:31:18.590 iops : min= 1988, max= 2116, avg=2049.60, stdev=42.45, samples=10 00:31:18.590 lat (msec) : 2=0.11%, 4=81.25%, 10=18.61%, 50=0.03% 00:31:18.590 cpu : usr=97.48%, sys=2.24%, ctx=7, majf=0, minf=9 00:31:18.590 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.590 issued rwts: total=10251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.590 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.590 filename1: (groupid=0, jobs=1): err= 0: pid=3247653: Mon Jun 10 14:39:55 2024 00:31:18.590 read: IOPS=2150, BW=16.8MiB/s (17.6MB/s)(84.0MiB/5003msec) 00:31:18.590 slat (nsec): min=8186, max=39987, avg=9395.04, stdev=3414.78 00:31:18.590 clat (usec): min=2500, max=46590, avg=3700.02, stdev=1210.15 00:31:18.590 lat (usec): min=2508, max=46623, avg=3709.41, stdev=1210.28 00:31:18.590 clat percentiles (usec): 00:31:18.590 | 1.00th=[ 3064], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3490], 00:31:18.590 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3752], 00:31:18.590 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 3851], 95.00th=[ 4113], 00:31:18.590 | 99.00th=[ 5211], 99.50th=[ 5407], 99.90th=[ 5669], 99.95th=[46400], 00:31:18.590 | 99.99th=[46400] 00:31:18.591 bw ( KiB/s): min=15134, max=17744, per=25.71%, avg=17206.20, stdev=743.70, samples=10 00:31:18.591 iops : min= 1891, max= 2218, avg=2150.70, stdev=93.20, samples=10 00:31:18.591 lat (msec) : 4=93.16%, 10=6.77%, 50=0.07% 00:31:18.591 cpu : usr=97.52%, sys=2.22%, ctx=3, majf=0, minf=0 00:31:18.591 IO depths : 1=0.1%, 2=0.1%, 4=63.7%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.591 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.591 issued rwts: total=10758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.591 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.591 00:31:18.591 Run status group 0 (all jobs): 00:31:18.591 READ: bw=65.3MiB/s (68.5MB/s), 15.9MiB/s-17.2MiB/s (16.6MB/s-18.0MB/s), io=329MiB (345MB), run=5001-5042msec 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 00:31:18.591 real 0m24.336s 00:31:18.591 user 5m22.023s 00:31:18.591 sys 0m3.584s 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 ************************************ 00:31:18.591 END TEST fio_dif_rand_params 00:31:18.591 ************************************ 00:31:18.591 14:39:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:18.591 14:39:55 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:18.591 14:39:55 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 ************************************ 00:31:18.591 START TEST fio_dif_digest 00:31:18.591 ************************************ 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 bdev_null0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.591 [2024-06-10 14:39:55.786626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.591 { 00:31:18.591 "params": { 00:31:18.591 "name": "Nvme$subsystem", 00:31:18.591 "trtype": "$TEST_TRANSPORT", 00:31:18.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.591 "adrfam": "ipv4", 00:31:18.591 "trsvcid": "$NVMF_PORT", 00:31:18.591 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.591 "hdgst": ${hdgst:-false}, 00:31:18.591 "ddgst": ${ddgst:-false} 00:31:18.591 }, 00:31:18.591 "method": "bdev_nvme_attach_controller" 00:31:18.591 } 00:31:18.591 EOF 00:31:18.591 )") 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:18.591 "params": { 00:31:18.591 "name": "Nvme0", 00:31:18.591 "trtype": "tcp", 00:31:18.591 "traddr": "10.0.0.2", 00:31:18.591 "adrfam": "ipv4", 00:31:18.591 "trsvcid": "4420", 00:31:18.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.591 "hdgst": true, 00:31:18.591 "ddgst": true 00:31:18.591 }, 00:31:18.591 "method": "bdev_nvme_attach_controller" 00:31:18.591 }' 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.591 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:18.592 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:18.592 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:18.592 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:18.592 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:18.592 14:39:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.850 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:18.850 ... 
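For the fio_dif_digest case the generated attach parameters are the same except that hdgst and ddgst flip to true, enabling NVMe/TCP header and data digests on the connection the fio plugin opens. Written out as a config file for the plugin, assuming the standard SPDK "subsystems"/"config" envelope that gen_nvmf_target_json wraps around the printed params:

    cat > bdev_digest.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF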
00:31:18.850 fio-3.35 00:31:18.850 Starting 3 threads 00:31:18.851 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.078 00:31:31.078 filename0: (groupid=0, jobs=1): err= 0: pid=3249146: Mon Jun 10 14:40:06 2024 00:31:31.078 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(320MiB/10049msec) 00:31:31.078 slat (nsec): min=8470, max=33556, avg=9264.36, stdev=1212.07 00:31:31.078 clat (usec): min=7852, max=48760, avg=11751.42, stdev=2006.90 00:31:31.078 lat (usec): min=7861, max=48768, avg=11760.68, stdev=2006.96 00:31:31.078 clat percentiles (usec): 00:31:31.078 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:31:31.078 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12125], 60.00th=[12649], 00:31:31.078 | 70.00th=[13042], 80.00th=[13566], 90.00th=[13960], 95.00th=[14353], 00:31:31.078 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:31:31.078 | 99.99th=[49021] 00:31:31.078 bw ( KiB/s): min=29696, max=34816, per=40.08%, avg=32675.35, stdev=1303.66, samples=20 00:31:31.078 iops : min= 232, max= 272, avg=255.25, stdev=10.23, samples=20 00:31:31.078 lat (msec) : 10=25.35%, 20=74.61%, 50=0.04% 00:31:31.078 cpu : usr=95.34%, sys=4.43%, ctx=18, majf=0, minf=156 00:31:31.078 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.078 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.078 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.078 filename0: (groupid=0, jobs=1): err= 0: pid=3249147: Mon Jun 10 14:40:06 2024 00:31:31.078 read: IOPS=137, BW=17.2MiB/s (18.1MB/s)(173MiB/10033msec) 00:31:31.078 slat (nsec): min=8464, max=32997, avg=9321.33, stdev=1231.98 00:31:31.078 clat (usec): min=9081, max=97476, avg=21734.39, stdev=17224.26 00:31:31.078 lat (usec): min=9089, max=97485, avg=21743.71, stdev=17224.25 00:31:31.078 clat percentiles (usec): 00:31:31.078 | 1.00th=[10683], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:31:31.078 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:31:31.078 | 70.00th=[15270], 80.00th=[16188], 90.00th=[54264], 95.00th=[55313], 00:31:31.078 | 99.00th=[93848], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:31:31.078 | 99.99th=[96994] 00:31:31.078 bw ( KiB/s): min=11520, max=25088, per=21.68%, avg=17676.80, stdev=3085.98, samples=20 00:31:31.078 iops : min= 90, max= 196, avg=138.10, stdev=24.11, samples=20 00:31:31.078 lat (msec) : 10=0.58%, 20=82.01%, 50=0.07%, 100=17.34% 00:31:31.078 cpu : usr=96.73%, sys=3.04%, ctx=20, majf=0, minf=109 00:31:31.078 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.079 issued rwts: total=1384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.079 filename0: (groupid=0, jobs=1): err= 0: pid=3249148: Mon Jun 10 14:40:06 2024 00:31:31.079 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(308MiB/10047msec) 00:31:31.079 slat (nsec): min=8484, max=32866, avg=9259.93, stdev=1126.04 00:31:31.079 clat (usec): min=8201, max=55794, avg=12223.81, stdev=2648.41 00:31:31.079 lat (usec): min=8210, max=55803, avg=12233.07, stdev=2648.47 00:31:31.079 clat percentiles (usec): 00:31:31.079 | 
1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:31:31.079 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12518], 60.00th=[13042], 00:31:31.079 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:31:31.079 | 99.00th=[15926], 99.50th=[16450], 99.90th=[55313], 99.95th=[55313], 00:31:31.079 | 99.99th=[55837] 00:31:31.079 bw ( KiB/s): min=28672, max=34304, per=38.59%, avg=31462.40, stdev=1452.67, samples=20 00:31:31.079 iops : min= 224, max= 268, avg=245.80, stdev=11.35, samples=20 00:31:31.079 lat (msec) : 10=18.58%, 20=81.22%, 50=0.08%, 100=0.12% 00:31:31.079 cpu : usr=95.45%, sys=4.30%, ctx=15, majf=0, minf=136 00:31:31.079 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.079 issued rwts: total=2460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.079 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.079 00:31:31.079 Run status group 0 (all jobs): 00:31:31.079 READ: bw=79.6MiB/s (83.5MB/s), 17.2MiB/s-31.8MiB/s (18.1MB/s-33.3MB/s), io=800MiB (839MB), run=10033-10049msec 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:31.079 00:31:31.079 real 0m11.155s 00:31:31.079 user 0m45.018s 00:31:31.079 sys 0m1.473s 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.079 14:40:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.079 ************************************ 00:31:31.079 END TEST fio_dif_digest 00:31:31.079 ************************************ 00:31:31.079 14:40:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:31.079 14:40:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:31.079 14:40:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:31.079 rmmod nvme_tcp 00:31:31.079 rmmod nvme_fabrics 
00:31:31.079 rmmod nvme_keyring 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3238419 ']' 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3238419 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 3238419 ']' 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 3238419 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3238419 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3238419' 00:31:31.079 killing process with pid 3238419 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@968 -- # kill 3238419 00:31:31.079 14:40:07 nvmf_dif -- common/autotest_common.sh@973 -- # wait 3238419 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:31.079 14:40:07 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:33.029 Waiting for block devices as requested 00:31:33.029 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:33.029 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:33.315 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:33.315 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:33.315 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:33.575 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:33.575 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:33.575 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:33.575 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:33.837 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:33.837 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:34.097 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:34.097 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:34.097 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:34.356 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:34.356 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:34.356 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:34.356 14:40:11 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:34.356 14:40:11 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:34.356 14:40:11 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.356 14:40:11 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:34.356 14:40:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.356 14:40:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:34.356 14:40:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.895 14:40:13 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.895 00:31:36.895 real 1m16.046s 00:31:36.895 user 8m2.132s 00:31:36.895 sys 0m18.863s 00:31:36.895 14:40:13 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:36.895 14:40:13 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:31:36.895 ************************************ 00:31:36.895 END TEST nvmf_dif 00:31:36.895 ************************************ 00:31:36.895 14:40:14 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.895 14:40:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:36.895 14:40:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:36.895 14:40:14 -- common/autotest_common.sh@10 -- # set +x 00:31:36.895 ************************************ 00:31:36.895 START TEST nvmf_abort_qd_sizes 00:31:36.895 ************************************ 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.895 * Looking for test storage... 00:31:36.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.895 14:40:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.896 14:40:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.896 14:40:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:43.473 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:43.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:43.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:43.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:43.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
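The discovery loop above walks a pre-built PCI-ID cache: 0x8086/0x159b is the Intel E810 device ID reported for 0000:4b:00.0 and 0000:4b:00.1, and each port's netdev is read from its /sys/bus/pci/devices/<bdf>/net directory. Outside the harness the same lookup can be approximated with lspci; this is a hand sketch, not what nvmf/common.sh itself runs:

    # List E810 (8086:159b) ports and the netdevs bound to them.
    for bdf in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
        echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net 2>/dev/null)"
    done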
00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:43.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:31:43.474 00:31:43.474 --- 10.0.0.2 ping statistics --- 00:31:43.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.474 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:31:43.474 00:31:43.474 --- 10.0.0.1 ping statistics --- 00:31:43.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.474 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:43.474 14:40:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:46.769 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.769 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.769 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:46.770 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3258289 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3258289 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 3258289 ']' 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
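The interface plumbing captured above is what lets one dual-port E810 card act as both initiator and target on a single machine: port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, port cvl_0_1 stays in the root namespace as 10.0.0.1, and the nvmf target is then launched inside that namespace. Condensed from the logged commands (the nvmf_tgt path is shortened here; event mask and core mask mirror the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &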
00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.030 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:47.030 [2024-06-10 14:40:24.477846] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:31:47.030 [2024-06-10 14:40:24.477910] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.030 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.030 [2024-06-10 14:40:24.564434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:47.291 [2024-06-10 14:40:24.666028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.291 [2024-06-10 14:40:24.666085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.291 [2024-06-10 14:40:24.666093] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.291 [2024-06-10 14:40:24.666100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.291 [2024-06-10 14:40:24.666106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.291 [2024-06-10 14:40:24.666241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.291 [2024-06-10 14:40:24.666342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:47.291 [2024-06-10 14:40:24.666407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.291 [2024-06-10 14:40:24.666442] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:47.862 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.862 ************************************ 00:31:47.862 START TEST spdk_target_abort 00:31:47.862 ************************************ 00:31:47.862 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:31:47.862 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:47.862 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:47.862 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.862 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.432 spdk_targetn1 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.432 [2024-06-10 14:40:25.744249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.432 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
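(The spdk_target_abort case builds its target out of the local PCIe NVMe drive at 0000:65:00.0. Stripped of the rpc_cmd wrapper, the configuration applied in these traces, together with the TCP listener added in the very next trace lines, amounts to roughly:

    # all RPCs go to the target started earlier on /var/tmp/spdk.sock
    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
)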
00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.433 [2024-06-10 14:40:25.772479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:48.433 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.433 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.433 [2024-06-10 14:40:25.945835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:624 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:48.433 [2024-06-10 14:40:25.945864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004f p:1 m:0 dnr:0 00:31:48.433 [2024-06-10 14:40:25.984887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2080 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:48.433 [2024-06-10 14:40:25.984905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:51.734 Initializing NVMe Controllers 00:31:51.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:51.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:51.734 Initialization complete. Launching workers. 00:31:51.734 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12806, failed: 2 00:31:51.734 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3454, failed to submit 9354 00:31:51.734 success 721, unsuccess 2733, failed 0 00:31:51.734 14:40:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:51.734 14:40:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:51.734 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.734 [2024-06-10 14:40:29.278490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3136 len:8 PRP1 0x200007c44000 PRP2 0x0 00:31:51.734 [2024-06-10 14:40:29.278534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0093 p:0 m:0 dnr:0 00:31:52.306 [2024-06-10 14:40:29.684611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:12416 len:8 PRP1 0x200007c48000 PRP2 0x0 00:31:52.306 [2024-06-10 14:40:29.684641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:31:54.849 Initializing NVMe Controllers 00:31:54.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:54.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:54.849 Initialization complete. Launching workers. 
00:31:54.849 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8591, failed: 2 00:31:54.849 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7358 00:31:54.849 success 375, unsuccess 860, failed 0 00:31:54.849 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:54.849 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:54.849 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.792 [2024-06-10 14:40:33.190775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:162 nsid:1 lba:84784 len:8 PRP1 0x2000078f2000 PRP2 0x0 00:31:55.792 [2024-06-10 14:40:33.190812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:162 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:57.708 [2024-06-10 14:40:34.796190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:265184 len:8 PRP1 0x2000078e2000 PRP2 0x0 00:31:57.708 [2024-06-10 14:40:34.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:0089 p:0 m:0 dnr:0 00:31:58.001 Initializing NVMe Controllers 00:31:58.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.001 Initialization complete. Launching workers. 00:31:58.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42360, failed: 2 00:31:58.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2503, failed to submit 39859 00:31:58.001 success 578, unsuccess 1925, failed 0 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.002 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3258289 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 3258289 ']' 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 3258289 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:59.947 
14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3258289 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3258289' 00:31:59.947 killing process with pid 3258289 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 3258289 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 3258289 00:31:59.947 00:31:59.947 real 0m12.069s 00:31:59.947 user 0m49.495s 00:31:59.947 sys 0m1.661s 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:59.947 14:40:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.947 ************************************ 00:31:59.947 END TEST spdk_target_abort 00:31:59.947 ************************************ 00:32:00.208 14:40:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:00.208 14:40:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:00.208 14:40:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:00.208 14:40:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.208 ************************************ 00:32:00.208 START TEST kernel_target_abort 00:32:00.208 ************************************ 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.208 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 
00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:00.209 14:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:03.513 Waiting for block devices as requested 00:32:03.513 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:03.513 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:03.513 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:03.773 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:03.773 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:03.773 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:04.034 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:04.034 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:04.034 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:04.296 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:04.296 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:04.296 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:04.557 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:04.557 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:04.557 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:04.818 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:04.818 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:04.818 No valid GPT data, bailing 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 
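(With /dev/nvme0n1 confirmed to carry no partition table, the trace lines that follow assemble the in-kernel target purely through nvmet configfs (the nvmet module was loaded earlier in the trace). Condensed from the mkdir/echo/ln steps below; xtrace does not capture redirection targets, so the attribute names here are the standard nvmet configfs ones and are an assumption about where each echo lands:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # assumed target for the first echo
    echo 1            > "$sub/attr_allow_any_host"               # assumed target for the first "echo 1"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
)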
00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:04.818 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:05.080 00:32:05.080 Discovery Log Number of Records 2, Generation counter 2 00:32:05.080 =====Discovery Log Entry 0====== 00:32:05.080 trtype: tcp 00:32:05.080 adrfam: ipv4 00:32:05.080 subtype: current discovery subsystem 00:32:05.080 treq: not specified, sq flow control disable supported 00:32:05.080 portid: 1 00:32:05.080 trsvcid: 4420 00:32:05.080 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:05.080 traddr: 10.0.0.1 00:32:05.080 eflags: none 00:32:05.080 sectype: none 00:32:05.080 =====Discovery Log Entry 1====== 00:32:05.080 trtype: tcp 00:32:05.080 adrfam: ipv4 00:32:05.080 subtype: nvme subsystem 00:32:05.080 treq: not specified, sq flow control disable supported 00:32:05.080 portid: 1 00:32:05.080 trsvcid: 4420 00:32:05.080 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:05.080 traddr: 10.0.0.1 00:32:05.080 eflags: none 00:32:05.080 sectype: none 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local 
trsvcid=4420 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:05.080 14:40:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:05.080 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.384 Initializing NVMe Controllers 00:32:08.384 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:08.384 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:08.384 Initialization complete. Launching workers. 
00:32:08.384 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66329, failed: 0 00:32:08.384 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66329, failed to submit 0 00:32:08.384 success 0, unsuccess 66329, failed 0 00:32:08.384 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:08.384 14:40:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.384 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.682 Initializing NVMe Controllers 00:32:11.682 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:11.682 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:11.682 Initialization complete. Launching workers. 00:32:11.682 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105898, failed: 0 00:32:11.682 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26710, failed to submit 79188 00:32:11.682 success 0, unsuccess 26710, failed 0 00:32:11.682 14:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.682 14:40:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.682 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.225 Initializing NVMe Controllers 00:32:14.225 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.225 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:14.225 Initialization complete. Launching workers. 
00:32:14.225 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104348, failed: 0 00:32:14.225 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26102, failed to submit 78246 00:32:14.225 success 0, unsuccess 26102, failed 0 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:14.225 14:40:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:17.524 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:17.524 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:17.785 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:19.698 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:19.698 00:32:19.698 real 0m19.475s 00:32:19.698 user 0m9.498s 00:32:19.698 sys 0m5.594s 00:32:19.698 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:19.698 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:19.698 ************************************ 00:32:19.699 END TEST kernel_target_abort 00:32:19.699 ************************************ 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:19.699 rmmod nvme_tcp 00:32:19.699 rmmod nvme_fabrics 00:32:19.699 rmmod nvme_keyring 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3258289 ']' 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3258289 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 3258289 ']' 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 3258289 00:32:19.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3258289) - No such process 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 3258289 is not found' 00:32:19.699 Process with pid 3258289 is not found 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:19.699 14:40:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:22.994 Waiting for block devices as requested 00:32:22.994 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:22.994 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:22.994 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:23.254 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:23.254 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:23.254 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:23.514 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:23.514 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:23.514 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:23.774 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:23.774 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:23.774 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:23.774 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:24.032 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:24.032 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:24.032 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:24.293 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:24.293 14:41:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.202 14:41:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:26.202 00:32:26.202 real 0m49.727s 00:32:26.202 user 1m3.879s 00:32:26.202 sys 0m17.103s 00:32:26.202 14:41:03 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:32:26.202 14:41:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:26.202 ************************************ 00:32:26.202 END TEST nvmf_abort_qd_sizes 00:32:26.202 ************************************ 00:32:26.462 14:41:03 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:26.463 14:41:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:26.463 14:41:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:26.463 14:41:03 -- common/autotest_common.sh@10 -- # set +x 00:32:26.463 ************************************ 00:32:26.463 START TEST keyring_file 00:32:26.463 ************************************ 00:32:26.463 14:41:03 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:26.463 * Looking for test storage... 00:32:26.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.463 14:41:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.463 14:41:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.463 14:41:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.463 14:41:03 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.463 14:41:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.463 14:41:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.463 14:41:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:26.463 14:41:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:26.463 14:41:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:26.463 14:41:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:26.463 14:41:03 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:26.463 14:41:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SeFaSTGjxv 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:26.463 14:41:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SeFaSTGjxv 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SeFaSTGjxv 00:32:26.463 14:41:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SeFaSTGjxv 00:32:26.463 14:41:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:26.463 14:41:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:26.723 14:41:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:26.723 14:41:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3RSqfFe3zo 00:32:26.723 14:41:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:26.723 14:41:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:26.723 14:41:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:26.723 14:41:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:26.723 14:41:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:26.724 14:41:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:26.724 14:41:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:26.724 14:41:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3RSqfFe3zo 00:32:26.724 14:41:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3RSqfFe3zo 00:32:26.724 14:41:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3RSqfFe3zo 00:32:26.724 14:41:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=3268482 00:32:26.724 14:41:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3268482 00:32:26.724 14:41:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3268482 ']' 00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
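(Before starting spdk_tgt, the keyring_file test writes two TLS PSKs to temp files in the NVMe/TCP interchange format. The body of the inline "python -" step is not captured by xtrace; the sketch below is a guess at what it computes, assuming the interchange format base64-encodes the raw key followed by its little-endian CRC-32 and that digest 0 maps to the "00" (no hash) indicator:

    key=00112233445566778899aabbccddeeff
    # assumed reconstruction of format_interchange_psk, not the script's actual implementation
    python3 -c 'import base64,binascii,struct,sys; k=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:"+base64.b64encode(k+struct.pack("<I",binascii.crc32(k))).decode()+":")' "$key"
)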
00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:26.724 14:41:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.724 [2024-06-10 14:41:04.169117] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:32:26.724 [2024-06-10 14:41:04.169171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268482 ] 00:32:26.724 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.724 [2024-06-10 14:41:04.242065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.724 [2024-06-10 14:41:04.308064] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:27.662 14:41:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.662 [2024-06-10 14:41:05.040119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.662 null0 00:32:27.662 [2024-06-10 14:41:05.072162] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:27.662 [2024-06-10 14:41:05.072514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:27.662 [2024-06-10 14:41:05.080176] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:27.662 14:41:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:27.662 14:41:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.662 [2024-06-10 14:41:05.092213] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:27.662 request: 00:32:27.662 { 00:32:27.662 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.662 "secure_channel": false, 00:32:27.662 "listen_address": { 00:32:27.662 "trtype": "tcp", 00:32:27.662 "traddr": "127.0.0.1", 00:32:27.662 "trsvcid": "4420" 00:32:27.662 }, 00:32:27.662 "method": "nvmf_subsystem_add_listener", 00:32:27.662 "req_id": 1 00:32:27.662 } 00:32:27.662 Got JSON-RPC error response 00:32:27.662 response: 
00:32:27.662 { 00:32:27.662 "code": -32602, 00:32:27.662 "message": "Invalid parameters" 00:32:27.662 } 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:27.663 14:41:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=3268758 00:32:27.663 14:41:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3268758 /var/tmp/bperf.sock 00:32:27.663 14:41:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3268758 ']' 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:27.663 14:41:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.663 [2024-06-10 14:41:05.119540] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 00:32:27.663 [2024-06-10 14:41:05.119579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268758 ] 00:32:27.663 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.663 [2024-06-10 14:41:05.169498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.663 [2024-06-10 14:41:05.234045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.922 14:41:05 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:27.922 14:41:05 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:27.922 14:41:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:27.922 14:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:27.922 14:41:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3RSqfFe3zo 00:32:27.922 14:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3RSqfFe3zo 00:32:28.182 14:41:05 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:28.182 14:41:05 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:28.182 14:41:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.182 14:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.182 14:41:05 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.444 14:41:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.SeFaSTGjxv == \/\t\m\p\/\t\m\p\.\S\e\F\a\S\T\G\j\x\v ]] 00:32:28.444 14:41:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:28.444 14:41:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:28.444 14:41:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.444 14:41:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.444 14:41:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:28.734 14:41:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3RSqfFe3zo == \/\t\m\p\/\t\m\p\.\3\R\S\q\f\F\e\3\z\o ]] 00:32:28.734 14:41:06 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:28.734 14:41:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:28.734 14:41:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.734 14:41:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.734 14:41:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.734 14:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.003 14:41:06 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:29.003 14:41:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.003 14:41:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:29.003 14:41:06 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.003 14:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.263 [2024-06-10 14:41:06.726070] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:29.263 nvme0n1 00:32:29.263 14:41:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:29.263 14:41:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.263 14:41:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.263 14:41:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.263 14:41:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.263 14:41:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.523 14:41:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:29.523 14:41:07 keyring_file -- 
keyring/file.sh@60 -- # get_refcnt key1 00:32:29.523 14:41:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:29.523 14:41:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.523 14:41:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.523 14:41:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.523 14:41:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.783 14:41:07 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:29.783 14:41:07 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.783 Running I/O for 1 seconds... 00:32:31.172 00:32:31.172 Latency(us) 00:32:31.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.172 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:31.172 nvme0n1 : 1.00 14361.71 56.10 0.00 0.00 8886.84 4642.13 16165.55 00:32:31.172 =================================================================================================================== 00:32:31.172 Total : 14361.71 56.10 0.00 0.00 8886.84 4642.13 16165.55 00:32:31.172 0 00:32:31.172 14:41:08 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:31.172 14:41:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.172 14:41:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.433 14:41:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:31.433 14:41:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:31.433 14:41:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.433 14:41:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.433 14:41:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.433 14:41:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.433 14:41:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.433 14:41:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:31.433 14:41:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.433 
14:41:09 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:31.433 14:41:09 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.433 14:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.693 [2024-06-10 14:41:09.215299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:31.693 [2024-06-10 14:41:09.216260] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14e60 (107): Transport endpoint is not connected 00:32:31.693 [2024-06-10 14:41:09.217254] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa14e60 (9): Bad file descriptor 00:32:31.693 [2024-06-10 14:41:09.218254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:31.693 [2024-06-10 14:41:09.218264] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:31.693 [2024-06-10 14:41:09.218271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
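The attach attempt above is the negative half of this part of the test: with --psk key1 the TLS connection to 127.0.0.1:4420 never comes up, the socket is reported as disconnected, and the RPC fails with -5 (Input/output error), whereas the earlier attach with key0 succeeded and bumped that key's refcnt to 2. Condensed from the RPC calls visible in the log (socket path, key names and NQNs are the ones the test uses; this is a sketch of the flow, not the test script itself):

```bash
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Register two file-based keys on the running bdevperf app.
$RPC keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv
$RPC keyring_file_add_key key1 /tmp/tmp.3RSqfFe3zo

# Inspect a key: its path, refcnt and removed flag come back as JSON.
$RPC keyring_get_keys | jq '.[] | select(.name == "key0")'

# Attach an NVMe/TCP controller that takes its TLS PSK from the keyring by
# name; while the controller exists, the key's refcnt reads 2 instead of 1.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Tear the controller down again; the refcnt drops back to 1.
$RPC bdev_nvme_detach_controller nvme0
```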
00:32:31.693 request: 00:32:31.693 { 00:32:31.693 "name": "nvme0", 00:32:31.693 "trtype": "tcp", 00:32:31.693 "traddr": "127.0.0.1", 00:32:31.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.693 "adrfam": "ipv4", 00:32:31.693 "trsvcid": "4420", 00:32:31.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.693 "psk": "key1", 00:32:31.693 "method": "bdev_nvme_attach_controller", 00:32:31.693 "req_id": 1 00:32:31.693 } 00:32:31.693 Got JSON-RPC error response 00:32:31.693 response: 00:32:31.693 { 00:32:31.693 "code": -5, 00:32:31.693 "message": "Input/output error" 00:32:31.693 } 00:32:31.694 14:41:09 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:31.694 14:41:09 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:31.694 14:41:09 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:31.694 14:41:09 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:31.694 14:41:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:31.694 14:41:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.694 14:41:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.694 14:41:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.694 14:41:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.694 14:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.953 14:41:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:31.953 14:41:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:31.953 14:41:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.953 14:41:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.953 14:41:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.953 14:41:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.953 14:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.213 14:41:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:32.213 14:41:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:32.213 14:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:32.474 14:41:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:32.474 14:41:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:32.734 14:41:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:32.734 14:41:10 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:32.734 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.734 14:41:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:32.734 14:41:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.SeFaSTGjxv 00:32:32.734 14:41:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:32.734 14:41:10 
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:32.734 14:41:10 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:32.734 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:32.994 [2024-06-10 14:41:10.444691] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SeFaSTGjxv': 0100660 00:32:32.994 [2024-06-10 14:41:10.444718] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:32.994 request: 00:32:32.994 { 00:32:32.994 "name": "key0", 00:32:32.994 "path": "/tmp/tmp.SeFaSTGjxv", 00:32:32.994 "method": "keyring_file_add_key", 00:32:32.994 "req_id": 1 00:32:32.994 } 00:32:32.994 Got JSON-RPC error response 00:32:32.994 response: 00:32:32.994 { 00:32:32.994 "code": -1, 00:32:32.994 "message": "Operation not permitted" 00:32:32.994 } 00:32:32.994 14:41:10 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:32.994 14:41:10 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:32.994 14:41:10 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:32.994 14:41:10 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:32.994 14:41:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.SeFaSTGjxv 00:32:32.994 14:41:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:32.994 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SeFaSTGjxv 00:32:33.254 14:41:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.SeFaSTGjxv 00:32:33.254 14:41:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.254 14:41:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:33.254 14:41:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.254 14:41:10 
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:33.254 14:41:10 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.254 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.514 [2024-06-10 14:41:10.962015] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SeFaSTGjxv': No such file or directory 00:32:33.514 [2024-06-10 14:41:10.962031] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:33.514 [2024-06-10 14:41:10.962060] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:33.514 [2024-06-10 14:41:10.962067] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:33.514 [2024-06-10 14:41:10.962074] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:33.514 request: 00:32:33.514 { 00:32:33.514 "name": "nvme0", 00:32:33.514 "trtype": "tcp", 00:32:33.514 "traddr": "127.0.0.1", 00:32:33.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.514 "adrfam": "ipv4", 00:32:33.514 "trsvcid": "4420", 00:32:33.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.514 "psk": "key0", 00:32:33.514 "method": "bdev_nvme_attach_controller", 00:32:33.514 "req_id": 1 00:32:33.514 } 00:32:33.514 Got JSON-RPC error response 00:32:33.514 response: 00:32:33.514 { 00:32:33.514 "code": -19, 00:32:33.514 "message": "No such device" 00:32:33.514 } 00:32:33.514 14:41:10 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:33.514 14:41:10 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:33.514 14:41:10 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:33.514 14:41:10 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:33.514 14:41:10 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:33.514 14:41:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:33.774 14:41:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v4nNpLb3Yd 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:33.774 14:41:11 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:33.774 14:41:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.774 14:41:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:33.774 14:41:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:33.774 14:41:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:33.774 14:41:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v4nNpLb3Yd 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v4nNpLb3Yd 00:32:33.774 14:41:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.v4nNpLb3Yd 00:32:33.774 14:41:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4nNpLb3Yd 00:32:33.774 14:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v4nNpLb3Yd 00:32:34.033 14:41:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:34.033 14:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:34.293 nvme0n1 00:32:34.293 14:41:11 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:34.293 14:41:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.293 14:41:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.293 14:41:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.293 14:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.293 14:41:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.552 14:41:11 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:34.552 14:41:11 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:34.552 14:41:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:34.552 14:41:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:34.552 14:41:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:34.552 14:41:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.552 14:41:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.552 14:41:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.811 14:41:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:34.811 14:41:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:34.811 14:41:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.811 14:41:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.811 14:41:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.811 14:41:12 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.811 14:41:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.070 14:41:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:35.070 14:41:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:35.070 14:41:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:35.329 14:41:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:35.329 14:41:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.329 14:41:12 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:35.329 14:41:12 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:35.329 14:41:12 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4nNpLb3Yd 00:32:35.329 14:41:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v4nNpLb3Yd 00:32:35.589 14:41:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3RSqfFe3zo 00:32:35.589 14:41:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3RSqfFe3zo 00:32:35.850 14:41:13 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.850 14:41:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.110 nvme0n1 00:32:36.110 14:41:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:36.110 14:41:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:36.371 14:41:13 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:36.371 "subsystems": [ 00:32:36.371 { 00:32:36.371 "subsystem": "keyring", 00:32:36.371 "config": [ 00:32:36.371 { 00:32:36.371 "method": "keyring_file_add_key", 00:32:36.371 "params": { 00:32:36.371 "name": "key0", 00:32:36.371 "path": "/tmp/tmp.v4nNpLb3Yd" 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "keyring_file_add_key", 00:32:36.371 "params": { 00:32:36.371 "name": "key1", 00:32:36.371 "path": "/tmp/tmp.3RSqfFe3zo" 00:32:36.371 } 00:32:36.371 } 00:32:36.371 ] 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "subsystem": "iobuf", 00:32:36.371 "config": [ 00:32:36.371 { 00:32:36.371 "method": "iobuf_set_options", 00:32:36.371 "params": { 00:32:36.371 "small_pool_count": 8192, 00:32:36.371 "large_pool_count": 1024, 00:32:36.371 "small_bufsize": 8192, 00:32:36.371 "large_bufsize": 135168 00:32:36.371 } 00:32:36.371 } 00:32:36.371 ] 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "subsystem": "sock", 00:32:36.371 "config": [ 00:32:36.371 { 00:32:36.371 "method": "sock_set_default_impl", 00:32:36.371 "params": { 00:32:36.371 
"impl_name": "posix" 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "sock_impl_set_options", 00:32:36.371 "params": { 00:32:36.371 "impl_name": "ssl", 00:32:36.371 "recv_buf_size": 4096, 00:32:36.371 "send_buf_size": 4096, 00:32:36.371 "enable_recv_pipe": true, 00:32:36.371 "enable_quickack": false, 00:32:36.371 "enable_placement_id": 0, 00:32:36.371 "enable_zerocopy_send_server": true, 00:32:36.371 "enable_zerocopy_send_client": false, 00:32:36.371 "zerocopy_threshold": 0, 00:32:36.371 "tls_version": 0, 00:32:36.371 "enable_ktls": false 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "sock_impl_set_options", 00:32:36.371 "params": { 00:32:36.371 "impl_name": "posix", 00:32:36.371 "recv_buf_size": 2097152, 00:32:36.371 "send_buf_size": 2097152, 00:32:36.371 "enable_recv_pipe": true, 00:32:36.371 "enable_quickack": false, 00:32:36.371 "enable_placement_id": 0, 00:32:36.371 "enable_zerocopy_send_server": true, 00:32:36.371 "enable_zerocopy_send_client": false, 00:32:36.371 "zerocopy_threshold": 0, 00:32:36.371 "tls_version": 0, 00:32:36.371 "enable_ktls": false 00:32:36.371 } 00:32:36.371 } 00:32:36.371 ] 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "subsystem": "vmd", 00:32:36.371 "config": [] 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "subsystem": "accel", 00:32:36.371 "config": [ 00:32:36.371 { 00:32:36.371 "method": "accel_set_options", 00:32:36.371 "params": { 00:32:36.371 "small_cache_size": 128, 00:32:36.371 "large_cache_size": 16, 00:32:36.371 "task_count": 2048, 00:32:36.371 "sequence_count": 2048, 00:32:36.371 "buf_count": 2048 00:32:36.371 } 00:32:36.371 } 00:32:36.371 ] 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "subsystem": "bdev", 00:32:36.371 "config": [ 00:32:36.371 { 00:32:36.371 "method": "bdev_set_options", 00:32:36.371 "params": { 00:32:36.371 "bdev_io_pool_size": 65535, 00:32:36.371 "bdev_io_cache_size": 256, 00:32:36.371 "bdev_auto_examine": true, 00:32:36.371 "iobuf_small_cache_size": 128, 00:32:36.371 "iobuf_large_cache_size": 16 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "bdev_raid_set_options", 00:32:36.371 "params": { 00:32:36.371 "process_window_size_kb": 1024 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "bdev_iscsi_set_options", 00:32:36.371 "params": { 00:32:36.371 "timeout_sec": 30 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "bdev_nvme_set_options", 00:32:36.371 "params": { 00:32:36.371 "action_on_timeout": "none", 00:32:36.371 "timeout_us": 0, 00:32:36.371 "timeout_admin_us": 0, 00:32:36.371 "keep_alive_timeout_ms": 10000, 00:32:36.371 "arbitration_burst": 0, 00:32:36.371 "low_priority_weight": 0, 00:32:36.371 "medium_priority_weight": 0, 00:32:36.371 "high_priority_weight": 0, 00:32:36.371 "nvme_adminq_poll_period_us": 10000, 00:32:36.371 "nvme_ioq_poll_period_us": 0, 00:32:36.371 "io_queue_requests": 512, 00:32:36.371 "delay_cmd_submit": true, 00:32:36.371 "transport_retry_count": 4, 00:32:36.371 "bdev_retry_count": 3, 00:32:36.371 "transport_ack_timeout": 0, 00:32:36.371 "ctrlr_loss_timeout_sec": 0, 00:32:36.371 "reconnect_delay_sec": 0, 00:32:36.371 "fast_io_fail_timeout_sec": 0, 00:32:36.371 "disable_auto_failback": false, 00:32:36.371 "generate_uuids": false, 00:32:36.371 "transport_tos": 0, 00:32:36.371 "nvme_error_stat": false, 00:32:36.371 "rdma_srq_size": 0, 00:32:36.371 "io_path_stat": false, 00:32:36.371 "allow_accel_sequence": false, 00:32:36.371 "rdma_max_cq_size": 0, 00:32:36.371 "rdma_cm_event_timeout_ms": 0, 00:32:36.371 
"dhchap_digests": [ 00:32:36.371 "sha256", 00:32:36.371 "sha384", 00:32:36.371 "sha512" 00:32:36.371 ], 00:32:36.371 "dhchap_dhgroups": [ 00:32:36.371 "null", 00:32:36.371 "ffdhe2048", 00:32:36.371 "ffdhe3072", 00:32:36.371 "ffdhe4096", 00:32:36.371 "ffdhe6144", 00:32:36.371 "ffdhe8192" 00:32:36.371 ] 00:32:36.371 } 00:32:36.371 }, 00:32:36.371 { 00:32:36.371 "method": "bdev_nvme_attach_controller", 00:32:36.371 "params": { 00:32:36.371 "name": "nvme0", 00:32:36.371 "trtype": "TCP", 00:32:36.371 "adrfam": "IPv4", 00:32:36.371 "traddr": "127.0.0.1", 00:32:36.371 "trsvcid": "4420", 00:32:36.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.371 "prchk_reftag": false, 00:32:36.371 "prchk_guard": false, 00:32:36.371 "ctrlr_loss_timeout_sec": 0, 00:32:36.371 "reconnect_delay_sec": 0, 00:32:36.371 "fast_io_fail_timeout_sec": 0, 00:32:36.371 "psk": "key0", 00:32:36.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:36.371 "hdgst": false, 00:32:36.372 "ddgst": false 00:32:36.372 } 00:32:36.372 }, 00:32:36.372 { 00:32:36.372 "method": "bdev_nvme_set_hotplug", 00:32:36.372 "params": { 00:32:36.372 "period_us": 100000, 00:32:36.372 "enable": false 00:32:36.372 } 00:32:36.372 }, 00:32:36.372 { 00:32:36.372 "method": "bdev_wait_for_examine" 00:32:36.372 } 00:32:36.372 ] 00:32:36.372 }, 00:32:36.372 { 00:32:36.372 "subsystem": "nbd", 00:32:36.372 "config": [] 00:32:36.372 } 00:32:36.372 ] 00:32:36.372 }' 00:32:36.372 14:41:13 keyring_file -- keyring/file.sh@114 -- # killprocess 3268758 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3268758 ']' 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3268758 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3268758 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3268758' 00:32:36.372 killing process with pid 3268758 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@968 -- # kill 3268758 00:32:36.372 Received shutdown signal, test time was about 1.000000 seconds 00:32:36.372 00:32:36.372 Latency(us) 00:32:36.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.372 =================================================================================================================== 00:32:36.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.372 14:41:13 keyring_file -- common/autotest_common.sh@973 -- # wait 3268758 00:32:36.633 14:41:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=3270571 00:32:36.633 14:41:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3270571 /var/tmp/bperf.sock 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 3270571 ']' 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:36.633 14:41:14 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c 
/dev/fd/63 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:36.633 14:41:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:36.633 14:41:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:36.633 "subsystems": [ 00:32:36.633 { 00:32:36.633 "subsystem": "keyring", 00:32:36.633 "config": [ 00:32:36.633 { 00:32:36.633 "method": "keyring_file_add_key", 00:32:36.633 "params": { 00:32:36.633 "name": "key0", 00:32:36.633 "path": "/tmp/tmp.v4nNpLb3Yd" 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "keyring_file_add_key", 00:32:36.633 "params": { 00:32:36.633 "name": "key1", 00:32:36.633 "path": "/tmp/tmp.3RSqfFe3zo" 00:32:36.633 } 00:32:36.633 } 00:32:36.633 ] 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "subsystem": "iobuf", 00:32:36.633 "config": [ 00:32:36.633 { 00:32:36.633 "method": "iobuf_set_options", 00:32:36.633 "params": { 00:32:36.633 "small_pool_count": 8192, 00:32:36.633 "large_pool_count": 1024, 00:32:36.633 "small_bufsize": 8192, 00:32:36.633 "large_bufsize": 135168 00:32:36.633 } 00:32:36.633 } 00:32:36.633 ] 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "subsystem": "sock", 00:32:36.633 "config": [ 00:32:36.633 { 00:32:36.633 "method": "sock_set_default_impl", 00:32:36.633 "params": { 00:32:36.633 "impl_name": "posix" 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "sock_impl_set_options", 00:32:36.633 "params": { 00:32:36.633 "impl_name": "ssl", 00:32:36.633 "recv_buf_size": 4096, 00:32:36.633 "send_buf_size": 4096, 00:32:36.633 "enable_recv_pipe": true, 00:32:36.633 "enable_quickack": false, 00:32:36.633 "enable_placement_id": 0, 00:32:36.633 "enable_zerocopy_send_server": true, 00:32:36.633 "enable_zerocopy_send_client": false, 00:32:36.633 "zerocopy_threshold": 0, 00:32:36.633 "tls_version": 0, 00:32:36.633 "enable_ktls": false 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "sock_impl_set_options", 00:32:36.633 "params": { 00:32:36.633 "impl_name": "posix", 00:32:36.633 "recv_buf_size": 2097152, 00:32:36.633 "send_buf_size": 2097152, 00:32:36.633 "enable_recv_pipe": true, 00:32:36.633 "enable_quickack": false, 00:32:36.633 "enable_placement_id": 0, 00:32:36.633 "enable_zerocopy_send_server": true, 00:32:36.633 "enable_zerocopy_send_client": false, 00:32:36.633 "zerocopy_threshold": 0, 00:32:36.633 "tls_version": 0, 00:32:36.633 "enable_ktls": false 00:32:36.633 } 00:32:36.633 } 00:32:36.633 ] 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "subsystem": "vmd", 00:32:36.633 "config": [] 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "subsystem": "accel", 00:32:36.633 "config": [ 00:32:36.633 { 00:32:36.633 "method": "accel_set_options", 00:32:36.633 "params": { 00:32:36.633 "small_cache_size": 128, 00:32:36.633 "large_cache_size": 16, 00:32:36.633 "task_count": 2048, 00:32:36.633 "sequence_count": 2048, 00:32:36.633 "buf_count": 2048 00:32:36.633 } 00:32:36.633 } 00:32:36.633 ] 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "subsystem": "bdev", 00:32:36.633 "config": [ 00:32:36.633 { 00:32:36.633 "method": "bdev_set_options", 00:32:36.633 "params": { 00:32:36.633 "bdev_io_pool_size": 65535, 00:32:36.633 "bdev_io_cache_size": 256, 00:32:36.633 "bdev_auto_examine": true, 00:32:36.633 
"iobuf_small_cache_size": 128, 00:32:36.633 "iobuf_large_cache_size": 16 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "bdev_raid_set_options", 00:32:36.633 "params": { 00:32:36.633 "process_window_size_kb": 1024 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "bdev_iscsi_set_options", 00:32:36.633 "params": { 00:32:36.633 "timeout_sec": 30 00:32:36.633 } 00:32:36.633 }, 00:32:36.633 { 00:32:36.633 "method": "bdev_nvme_set_options", 00:32:36.633 "params": { 00:32:36.633 "action_on_timeout": "none", 00:32:36.633 "timeout_us": 0, 00:32:36.633 "timeout_admin_us": 0, 00:32:36.633 "keep_alive_timeout_ms": 10000, 00:32:36.633 "arbitration_burst": 0, 00:32:36.633 "low_priority_weight": 0, 00:32:36.633 "medium_priority_weight": 0, 00:32:36.633 "high_priority_weight": 0, 00:32:36.633 "nvme_adminq_poll_period_us": 10000, 00:32:36.633 "nvme_ioq_poll_period_us": 0, 00:32:36.633 "io_queue_requests": 512, 00:32:36.633 "delay_cmd_submit": true, 00:32:36.633 "transport_retry_count": 4, 00:32:36.633 "bdev_retry_count": 3, 00:32:36.633 "transport_ack_timeout": 0, 00:32:36.633 "ctrlr_loss_timeout_sec": 0, 00:32:36.633 "reconnect_delay_sec": 0, 00:32:36.633 "fast_io_fail_timeout_sec": 0, 00:32:36.633 "disable_auto_failback": false, 00:32:36.633 "generate_uuids": false, 00:32:36.633 "transport_tos": 0, 00:32:36.633 "nvme_error_stat": false, 00:32:36.633 "rdma_srq_size": 0, 00:32:36.633 "io_path_stat": false, 00:32:36.633 "allow_accel_sequence": false, 00:32:36.634 "rdma_max_cq_size": 0, 00:32:36.634 "rdma_cm_event_timeout_ms": 0, 00:32:36.634 "dhchap_digests": [ 00:32:36.634 "sha256", 00:32:36.634 "sha384", 00:32:36.634 "sha512" 00:32:36.634 ], 00:32:36.634 "dhchap_dhgroups": [ 00:32:36.634 "null", 00:32:36.634 "ffdhe2048", 00:32:36.634 "ffdhe3072", 00:32:36.634 "ffdhe4096", 00:32:36.634 "ffdhe6144", 00:32:36.634 "ffdhe8192" 00:32:36.634 ] 00:32:36.634 } 00:32:36.634 }, 00:32:36.634 { 00:32:36.634 "method": "bdev_nvme_attach_controller", 00:32:36.634 "params": { 00:32:36.634 "name": "nvme0", 00:32:36.634 "trtype": "TCP", 00:32:36.634 "adrfam": "IPv4", 00:32:36.634 "traddr": "127.0.0.1", 00:32:36.634 "trsvcid": "4420", 00:32:36.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.634 "prchk_reftag": false, 00:32:36.634 "prchk_guard": false, 00:32:36.634 "ctrlr_loss_timeout_sec": 0, 00:32:36.634 "reconnect_delay_sec": 0, 00:32:36.634 "fast_io_fail_timeout_sec": 0, 00:32:36.634 "psk": "key0", 00:32:36.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:36.634 "hdgst": false, 00:32:36.634 "ddgst": false 00:32:36.634 } 00:32:36.634 }, 00:32:36.634 { 00:32:36.634 "method": "bdev_nvme_set_hotplug", 00:32:36.634 "params": { 00:32:36.634 "period_us": 100000, 00:32:36.634 "enable": false 00:32:36.634 } 00:32:36.634 }, 00:32:36.634 { 00:32:36.634 "method": "bdev_wait_for_examine" 00:32:36.634 } 00:32:36.634 ] 00:32:36.634 }, 00:32:36.634 { 00:32:36.634 "subsystem": "nbd", 00:32:36.634 "config": [] 00:32:36.634 } 00:32:36.634 ] 00:32:36.634 }' 00:32:36.634 [2024-06-10 14:41:14.093620] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:32:36.634 [2024-06-10 14:41:14.093674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270571 ] 00:32:36.634 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.634 [2024-06-10 14:41:14.151790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.634 [2024-06-10 14:41:14.214738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.894 [2024-06-10 14:41:14.361277] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:37.465 14:41:14 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:37.465 14:41:14 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:37.465 14:41:14 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:37.465 14:41:14 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:37.465 14:41:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.725 14:41:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:37.725 14:41:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.725 14:41:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:37.725 14:41:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:37.725 14:41:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.984 14:41:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:37.984 14:41:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:37.984 14:41:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:37.984 14:41:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:38.243 14:41:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:38.243 14:41:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:38.243 14:41:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.v4nNpLb3Yd /tmp/tmp.3RSqfFe3zo 00:32:38.243 14:41:15 keyring_file -- keyring/file.sh@20 -- # killprocess 3270571 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3270571 ']' 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3270571 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3270571 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3270571' 00:32:38.243 killing process with pid 3270571 00:32:38.243 14:41:15 keyring_file -- common/autotest_common.sh@968 -- # kill 3270571 00:32:38.243 Received shutdown signal, test time was about 1.000000 seconds 00:32:38.243 00:32:38.243 Latency(us) 00:32:38.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.244 =================================================================================================================== 00:32:38.244 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:38.244 14:41:15 keyring_file -- common/autotest_common.sh@973 -- # wait 3270571 00:32:38.503 14:41:15 keyring_file -- keyring/file.sh@21 -- # killprocess 3268482 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 3268482 ']' 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@953 -- # kill -0 3268482 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3268482 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3268482' 00:32:38.503 killing process with pid 3268482 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@968 -- # kill 3268482 00:32:38.503 [2024-06-10 14:41:15.972293] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:38.503 14:41:15 keyring_file -- common/autotest_common.sh@973 -- # wait 3268482 00:32:38.763 00:32:38.763 real 0m12.324s 00:32:38.763 user 0m30.488s 00:32:38.763 sys 0m2.659s 00:32:38.763 14:41:16 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:38.763 14:41:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:38.763 ************************************ 00:32:38.763 END TEST keyring_file 00:32:38.763 ************************************ 00:32:38.763 14:41:16 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:38.763 14:41:16 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:38.763 14:41:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:38.763 14:41:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:38.763 14:41:16 -- common/autotest_common.sh@10 -- # set +x 00:32:38.763 ************************************ 00:32:38.763 START TEST keyring_linux 00:32:38.763 ************************************ 00:32:38.763 14:41:16 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:38.763 * Looking for test storage... 
00:32:39.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:39.024 14:41:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:39.024 14:41:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.024 14:41:16 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.024 14:41:16 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.024 14:41:16 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.024 14:41:16 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.024 14:41:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.024 14:41:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.024 14:41:16 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.024 14:41:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:39.025 14:41:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:39.025 14:41:16 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:39.025 /tmp/:spdk-test:key0 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:39.025 14:41:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:39.025 14:41:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:39.025 /tmp/:spdk-test:key1 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3271003 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3271003 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 3271003 ']' 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:39.025 14:41:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:39.025 14:41:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:39.025 [2024-06-10 14:41:16.530252] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
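The prep_key calls above write the PSKs in the NVMe TLS interchange format before they are loaded into the kernel keyring. Decoding the value generated for :spdk-test:key0 (it appears verbatim in the keyctl commands below) shows the layout; reading the trailing four bytes as a CRC32 of the key material is an assumption here, the log itself only shows the encoded string:

```bash
# Interchange-format key as printed by the test: "NVMeTLSkey-1:<digest>:<base64>:"
tls_key='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
payload=$(cut -d: -f3 <<< "$tls_key")

# First 32 decoded bytes are the configured key: 00112233445566778899aabbccddeeff
base64 -d <<< "$payload" | head -c 32; echo

# The remaining 4 bytes are a checksum over the key material, presumably the
# CRC32 called for by the PSK interchange format.
base64 -d <<< "$payload" | tail -c 4 | od -An -tx1
```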
00:32:39.025 [2024-06-10 14:41:16.530341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271003 ] 00:32:39.025 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.025 [2024-06-10 14:41:16.610102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.285 [2024-06-10 14:41:16.682821] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.854 14:41:17 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:39.854 14:41:17 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:39.854 14:41:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:39.854 14:41:17 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.854 14:41:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:39.854 [2024-06-10 14:41:17.392114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.854 null0 00:32:39.854 [2024-06-10 14:41:17.424161] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:39.854 [2024-06-10 14:41:17.424530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:39.854 14:41:17 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:39.854 14:41:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:39.854 191990282 00:32:39.854 14:41:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:40.112 471890539 00:32:40.112 14:41:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3271335 00:32:40.112 14:41:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3271335 /var/tmp/bperf.sock 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 3271335 ']' 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:40.112 14:41:17 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:40.112 [2024-06-10 14:41:17.504191] Starting SPDK v24.09-pre git sha1 28a75b1f3 / DPDK 24.03.0 initialization... 
00:32:40.112 [2024-06-10 14:41:17.504290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271335 ] 00:32:40.112 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.112 [2024-06-10 14:41:17.564603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.112 [2024-06-10 14:41:17.629106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.112 14:41:17 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:40.113 14:41:17 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:40.113 14:41:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:40.113 14:41:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:40.371 14:41:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:40.371 14:41:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:40.631 14:41:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:40.631 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:40.890 [2024-06-10 14:41:18.307244] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:40.890 nvme0n1 00:32:40.890 14:41:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:40.890 14:41:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:40.890 14:41:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:40.890 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:40.890 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:40.890 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.149 14:41:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:41.149 14:41:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:41.149 14:41:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:41.149 14:41:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:41.149 14:41:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.149 14:41:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:41.149 14:41:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@25 -- # sn=191990282 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
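From here the flow mirrors the keyring_file case, except that the keys live in the kernel session keyring (@s) instead of in files, and bdevperf was started with --wait-for-rpc so the Linux keyring module can be enabled before the framework initializes. Condensed from the commands in the log (key names, the key value and the RPCs are the ones this run uses; a sketch, not the test script):

```bash
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Load the interchange-format PSK into the session keyring; keyctl prints the
# new key's serial number (191990282 in this run).
keyctl add user :spdk-test:key0 \
    'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s

# Enable the linux keyring module on the bdevperf app, then let it finish init.
$RPC keyring_linux_set_options --enable
$RPC framework_start_init

# The controller references the kernel key by name; keyring_get_keys reports it
# together with its serial number (.sn), which matches keyctl search.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl search @s user :spdk-test:key0

# Cleanup at the end of the test unlinks the key from the session keyring.
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
```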
00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 191990282 == \1\9\1\9\9\0\2\8\2 ]] 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 191990282 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:41.409 14:41:18 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.409 Running I/O for 1 seconds... 00:32:42.349 00:32:42.349 Latency(us) 00:32:42.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.349 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:42.349 nvme0n1 : 1.01 16063.49 62.75 0.00 0.00 7930.26 6553.60 16165.55 00:32:42.349 =================================================================================================================== 00:32:42.349 Total : 16063.49 62.75 0.00 0.00 7930.26 6553.60 16165.55 00:32:42.349 0 00:32:42.608 14:41:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:42.608 14:41:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:42.608 14:41:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:42.608 14:41:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:42.608 14:41:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:42.608 14:41:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:42.608 14:41:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:42.608 14:41:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.868 14:41:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:42.868 14:41:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:42.868 14:41:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:42.868 14:41:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:42.868 14:41:20 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:42.868 14:41:20 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:43.128 [2024-06-10 14:41:20.583915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:43.128 [2024-06-10 14:41:20.584454] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78e30 (107): Transport endpoint is not connected 00:32:43.128 [2024-06-10 14:41:20.585449] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb78e30 (9): Bad file descriptor 00:32:43.128 [2024-06-10 14:41:20.586450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:43.128 [2024-06-10 14:41:20.586461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:43.128 [2024-06-10 14:41:20.586467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:43.128 request: 00:32:43.128 { 00:32:43.128 "name": "nvme0", 00:32:43.128 "trtype": "tcp", 00:32:43.128 "traddr": "127.0.0.1", 00:32:43.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:43.128 "adrfam": "ipv4", 00:32:43.128 "trsvcid": "4420", 00:32:43.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.128 "psk": ":spdk-test:key1", 00:32:43.128 "method": "bdev_nvme_attach_controller", 00:32:43.128 "req_id": 1 00:32:43.128 } 00:32:43.128 Got JSON-RPC error response 00:32:43.128 response: 00:32:43.128 { 00:32:43.128 "code": -5, 00:32:43.128 "message": "Input/output error" 00:32:43.128 } 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@33 -- # sn=191990282 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 191990282 00:32:43.128 1 links removed 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@33 -- # sn=471890539 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 471890539 00:32:43.128 1 links removed 00:32:43.128 14:41:20 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 3271335 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 3271335 ']' 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 3271335 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3271335 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3271335' 00:32:43.128 killing process with pid 3271335 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@968 -- # kill 3271335 00:32:43.128 Received shutdown signal, test time was about 1.000000 seconds 00:32:43.128 00:32:43.128 Latency(us) 00:32:43.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.128 =================================================================================================================== 00:32:43.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:43.128 14:41:20 keyring_linux -- common/autotest_common.sh@973 -- # wait 3271335 00:32:43.387 14:41:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3271003 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 3271003 ']' 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 3271003 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3271003 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3271003' 00:32:43.387 killing process with pid 3271003 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@968 -- # kill 3271003 00:32:43.387 14:41:20 keyring_linux -- common/autotest_common.sh@973 -- # wait 3271003 00:32:43.648 00:32:43.648 real 0m4.806s 00:32:43.648 user 0m9.037s 00:32:43.648 sys 0m1.400s 00:32:43.648 14:41:21 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:43.648 14:41:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:43.648 ************************************ 00:32:43.648 END TEST keyring_linux 00:32:43.648 ************************************ 00:32:43.648 14:41:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
00:32:43.648 14:41:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:43.648 14:41:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:43.648 14:41:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:43.648 14:41:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:43.648 14:41:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:43.648 14:41:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:43.648 14:41:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:43.648 14:41:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:43.648 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:32:43.648 14:41:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:43.648 14:41:21 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:32:43.648 14:41:21 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:32:43.648 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:32:51.778 INFO: APP EXITING 00:32:51.778 INFO: killing all VMs 00:32:51.778 INFO: killing vhost app 00:32:51.778 INFO: EXIT DONE 00:32:54.366 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:54.366 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:54.627 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:54.627 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:57.926 Cleaning 00:32:57.926 Removing: /var/run/dpdk/spdk0/config 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:57.926 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:57.926 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:57.926 Removing: /var/run/dpdk/spdk1/config 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:57.926 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:57.926 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:57.926 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:57.926 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:57.926 Removing: /var/run/dpdk/spdk2/config 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:57.926 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:57.926 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:57.926 Removing: /var/run/dpdk/spdk3/config 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:57.926 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:57.926 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:57.926 Removing: /var/run/dpdk/spdk4/config 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:57.926 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:57.927 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:57.927 Removing: /dev/shm/bdev_svc_trace.1 00:32:57.927 Removing: /dev/shm/nvmf_trace.0 00:32:57.927 Removing: /dev/shm/spdk_tgt_trace.pid2811061 00:32:57.927 Removing: /var/run/dpdk/spdk0 00:32:57.927 Removing: /var/run/dpdk/spdk1 00:32:57.927 Removing: /var/run/dpdk/spdk2 00:32:57.927 Removing: /var/run/dpdk/spdk3 00:32:57.927 Removing: /var/run/dpdk/spdk4 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2809578 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2811061 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2811900 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2812950 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2813290 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2814353 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2814640 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2814814 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2815949 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2816727 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2817104 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2817428 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2817802 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2818051 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2818332 00:32:57.927 Removing: 
/var/run/dpdk/spdk_pid2818680 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2819068 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2820124 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2823714 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2824080 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2824448 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2824706 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2825156 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2825328 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2825904 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2825985 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2826343 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2826657 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2826726 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2827053 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2827496 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2828082 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2828683 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2828868 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2829078 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2829158 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2829508 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2829808 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2830002 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2830244 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2830602 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2830949 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2831298 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2831505 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2831709 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2832040 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2832397 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2832744 00:32:57.927 Removing: /var/run/dpdk/spdk_pid2833014 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2833216 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2833483 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2833832 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2834191 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2834543 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2834787 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2835002 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2835314 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2835723 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2840179 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2893956 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2898980 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2911030 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2917395 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2922083 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2922856 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2937421 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2937439 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2938658 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2939666 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2940672 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2941344 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2941346 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2941685 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2941820 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2941947 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2943019 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2944025 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2945032 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2945709 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2945711 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2946045 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2947488 00:32:58.187 Removing: 
/var/run/dpdk/spdk_pid2948877 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2958880 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2959234 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2964288 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2971336 00:32:58.187 Removing: /var/run/dpdk/spdk_pid2974739 00:32:58.188 Removing: /var/run/dpdk/spdk_pid2987808 00:32:58.188 Removing: /var/run/dpdk/spdk_pid2998434 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3000471 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3001486 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3022088 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3026765 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3060666 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3066221 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3068127 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3070238 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3070251 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3070422 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3070597 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3070985 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3073136 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3074056 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3074444 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3077718 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3078416 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3079130 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3084175 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3096050 00:32:58.188 Removing: /var/run/dpdk/spdk_pid3100897 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3107773 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3109274 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3110790 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3115863 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3120889 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3129808 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3129925 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3135567 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3135739 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3135936 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3136578 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3136583 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3141991 00:32:58.448 Removing: /var/run/dpdk/spdk_pid3142777 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3147893 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3150991 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3157343 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3163882 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3174430 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3182750 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3182752 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3205175 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3205773 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3206451 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3207127 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3208180 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3208819 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3209444 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3209999 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3214948 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3215272 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3222413 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3222674 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3225282 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3232608 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3232630 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3238594 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3241254 00:32:58.449 Removing: 
/var/run/dpdk/spdk_pid3243731 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3244952 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3247472 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3248829 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3258600 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3259265 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3259934 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3262691 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3263222 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3263890 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3268482 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3268758 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3270571 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3271003 00:32:58.449 Removing: /var/run/dpdk/spdk_pid3271335 00:32:58.449 Clean 00:32:58.709 14:41:36 -- common/autotest_common.sh@1450 -- # return 0 00:32:58.709 14:41:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:58.709 14:41:36 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:58.709 14:41:36 -- common/autotest_common.sh@10 -- # set +x 00:32:58.709 14:41:36 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:58.709 14:41:36 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:58.709 14:41:36 -- common/autotest_common.sh@10 -- # set +x 00:32:58.709 14:41:36 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:58.709 14:41:36 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:58.709 14:41:36 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:58.709 14:41:36 -- spdk/autotest.sh@391 -- # hash lcov 00:32:58.709 14:41:36 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:58.709 14:41:36 -- spdk/autotest.sh@393 -- # hostname 00:32:58.709 14:41:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:58.969 geninfo: WARNING: invalid characters removed from testname! 
00:33:25.537 14:42:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:26.475 14:42:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:29.009 14:42:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:30.914 14:42:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:33.455 14:42:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:36.000 14:42:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:37.914 14:42:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:37.914 14:42:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.914 14:42:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:37.914 14:42:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.914 14:42:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.914 14:42:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.914 14:42:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.914 14:42:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.914 14:42:15 -- paths/export.sh@5 -- $ export PATH 00:33:37.914 14:42:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.914 14:42:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:37.914 14:42:15 -- common/autobuild_common.sh@437 -- $ date +%s 00:33:37.914 14:42:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718023335.XXXXXX 00:33:37.914 14:42:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718023335.elmmJK 00:33:37.914 14:42:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:33:37.914 14:42:15 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:33:37.914 14:42:15 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:37.914 14:42:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:37.914 14:42:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:37.914 14:42:15 -- common/autobuild_common.sh@453 -- $ get_config_params 00:33:37.914 14:42:15 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:37.914 14:42:15 -- common/autotest_common.sh@10 -- $ set +x 00:33:37.914 14:42:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:37.914 14:42:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:33:37.914 14:42:15 -- pm/common@17 -- $ local monitor 00:33:37.914 14:42:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:37.914 14:42:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:37.914 14:42:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:37.914 14:42:15 -- pm/common@21 -- $ date +%s 00:33:37.914 14:42:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:37.914 14:42:15 -- pm/common@25 -- $ sleep 1 00:33:37.914 
14:42:15 -- pm/common@21 -- $ date +%s 00:33:37.914 14:42:15 -- pm/common@21 -- $ date +%s 00:33:37.914 14:42:15 -- pm/common@21 -- $ date +%s 00:33:37.914 14:42:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718023335 00:33:37.914 14:42:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718023335 00:33:37.914 14:42:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718023335 00:33:37.914 14:42:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718023335 00:33:37.914 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718023335_collect-vmstat.pm.log 00:33:38.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718023335_collect-cpu-load.pm.log 00:33:38.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718023335_collect-cpu-temp.pm.log 00:33:38.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718023335_collect-bmc-pm.bmc.pm.log 00:33:39.119 14:42:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:33:39.119 14:42:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:39.119 14:42:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:39.119 14:42:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:39.119 14:42:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:39.119 14:42:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:39.119 14:42:16 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:39.119 14:42:16 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:39.119 14:42:16 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:39.119 14:42:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:39.119 14:42:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:39.119 14:42:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:39.119 14:42:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:39.119 14:42:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.119 14:42:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:39.119 14:42:16 -- pm/common@44 -- $ pid=3283961 00:33:39.119 14:42:16 -- pm/common@50 -- $ kill -TERM 3283961 00:33:39.119 14:42:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.119 14:42:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:39.119 14:42:16 -- pm/common@44 -- $ pid=3283962 00:33:39.119 14:42:16 -- pm/common@50 -- $ 
kill -TERM 3283962 00:33:39.119 14:42:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.119 14:42:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:39.119 14:42:16 -- pm/common@44 -- $ pid=3283964 00:33:39.119 14:42:16 -- pm/common@50 -- $ kill -TERM 3283964 00:33:39.119 14:42:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.119 14:42:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:39.119 14:42:16 -- pm/common@44 -- $ pid=3283987 00:33:39.119 14:42:16 -- pm/common@50 -- $ sudo -E kill -TERM 3283987 00:33:39.119 + [[ -n 2691766 ]] 00:33:39.119 + sudo kill 2691766 00:33:39.130 [Pipeline] } 00:33:39.152 [Pipeline] // stage 00:33:39.159 [Pipeline] } 00:33:39.179 [Pipeline] // timeout 00:33:39.187 [Pipeline] } 00:33:39.207 [Pipeline] // catchError 00:33:39.213 [Pipeline] } 00:33:39.230 [Pipeline] // wrap 00:33:39.237 [Pipeline] } 00:33:39.263 [Pipeline] // catchError 00:33:39.274 [Pipeline] stage 00:33:39.276 [Pipeline] { (Epilogue) 00:33:39.293 [Pipeline] catchError 00:33:39.295 [Pipeline] { 00:33:39.311 [Pipeline] echo 00:33:39.313 Cleanup processes 00:33:39.319 [Pipeline] sh 00:33:39.636 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:39.636 3284065 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:39.636 3284511 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:39.652 [Pipeline] sh 00:33:39.941 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:39.941 ++ grep -v 'sudo pgrep' 00:33:39.941 ++ awk '{print $1}' 00:33:39.941 + sudo kill -9 3284065 00:33:39.955 [Pipeline] sh 00:33:40.244 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:55.165 [Pipeline] sh 00:33:55.454 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:55.454 Artifacts sizes are good 00:33:55.469 [Pipeline] archiveArtifacts 00:33:55.476 Archiving artifacts 00:33:55.669 [Pipeline] sh 00:33:55.985 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:56.003 [Pipeline] cleanWs 00:33:56.014 [WS-CLEANUP] Deleting project workspace... 00:33:56.014 [WS-CLEANUP] Deferred wipeout is used... 00:33:56.022 [WS-CLEANUP] done 00:33:56.024 [Pipeline] } 00:33:56.047 [Pipeline] // catchError 00:33:56.061 [Pipeline] sh 00:33:56.347 + logger -p user.info -t JENKINS-CI 00:33:56.358 [Pipeline] } 00:33:56.376 [Pipeline] // stage 00:33:56.383 [Pipeline] } 00:33:56.402 [Pipeline] // node 00:33:56.409 [Pipeline] End of Pipeline 00:33:56.451 Finished: SUCCESS